library(data.table) # data.table type
library(rcompanion) # plotNormalHistogram()
library(tidyr) # gather()
library(kableExtra) # kables for html document
library(naniar) # vis_miss()
library(corrplot) # corrplot()
library(caret) # findCorrelation()
library(REdaS) # KMOS(), bart_spher()
library(psych) # KMO(), fa(), principal()
library(lavaan)
library(semPlot) # for visualization
library(psy)
library(lavaanPlot)
We load the data files.
descriptions <- fread("Variables and Labels_Galeries Lafayette.csv")
df <- read.csv("Case Study III_Structural Equation Modeling.csv")
We investigate missing values in the data; we know that missing values have been coded as "999" in the dataset:
# replacing 999 by NA values
df1 <- df
df1[df1 == 999] <- NA
# separating dataframe only with the questionnaire data
df2 <- df1[,c(1:22)]
# proportion of rows containing NA values
1 - nrow(na.omit(df2)) / nrow(df2)
## [1] 0.3037975
# deleting rows with NAs
df2 <- na.omit(df2)
nrow(df2)
## [1] 385
As instructed, we delete the rows containing missing values. This removes about 30% of the dataset, which is quite a large share. We are left with 385 rows.
We first plot the variables to have a sense of how client perception is skewed.
# Plot the variables
par(mfrow = c(3, 3))
for (col in 1:ncol(df2)) {
  plotNormalHistogram(df2[[col]], main = paste("Frequency Distribution of Im", col))
}
rm(col)
For most questions the distribution is skewed to the left; answers leaned towards the "does apply completely" end of the scale. For question 7, which asks whether Galeries Lafayette Berlin embodies French savoir-vivre, no respondent answered "not at all": the range of the distribution starts at 2. Question 10, about whether the Galeries represent gourmet food, likewise received no completely negative answers.
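The left skew can be quantified with a simple sample-skewness sketch in base R (the helper function and toy response vector below are our own illustration, not part of the case-study data):

```r
# Sample skewness g1 = mean((x - mean(x))^3) / sd(x)^3 (population sd).
skewness <- function(x) {
  x <- x[!is.na(x)]
  m <- mean(x)
  mean((x - m)^3) / (sqrt(mean((x - m)^2)))^3
}

# Toy example: answers piling up at the "applies completely" end
x <- c(1, 3, 4, 4, 5, 5, 5, 5)
skewness(x)  # negative, i.e. left-skewed
```

A negative value confirms the long tail is on the low ("not at all") side of the scale.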
We run a correlation matrix to see whether we can identify groupings among our variables.
raqMatrix <- cor(df2)
corrplot(raqMatrix, order = 'hclust', tl.cex = 0.8, addrect = 10)
The clusters that form just from the correlations between variables give a first suggestion of the factors we might identify. We check the descriptions file to see if the clusters are thematically coherent:
- Im8, Im10, Im14: all regard food and its quality (French cuisine, gourmet food);
- Im6, Im7: French lifestyle;
- Im12, Im13: luxury goods and brands;
- Im20, Im21, Im22: the feel of the shopping experience, i.e. whether the customer feels at ease;
- Im16, Im19: the perception of professionalism;
- Im15, Im1, Im2: the brand/product assortment;
- Im18, Im17: how on-trend the Galeries are perceived to be;
- Im5, Im3, Im4: how the client perceives the decoration and arrangement of the shopping areas.
Variables Im9 and Im11 seem to stand on their own in terms of correlations; they concern French fashion and cosmetics respectively. Im9 in particular has no strong correlation with any other variable.
We display the highest correlation each variable has with any other variable.
var_cors <- apply(upper.tri(raqMatrix)*raqMatrix + lower.tri(raqMatrix)*raqMatrix,2,max)
ggplot(mapping = aes(x = reorder(names(var_cors), var_cors), y = var_cors)) +
geom_col() +
theme_classic() +
scale_y_continuous(limits = c(0,1), breaks = seq(0, 1, 0.1)) +
labs(x = "Variable",
y = "Highest correlation",
title = "Highest correlation of variables with other variables")
rm(var_cors)
Clearly, there are two considerable jumps separating Im9 and Im11 from the rest of the variables. We need to keep an eye on these two variables and assess whether they should be included in our model.
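The diagonal-masking trick used above can be expressed equivalently by blanking the diagonal before taking column maxima; a self-contained toy sketch (the data frame is illustrative only):

```r
# Highest correlation of each variable with any *other* variable:
# set the diagonal to NA so a variable cannot match itself.
toy <- data.frame(a = c(1, 2, 3, 4, 5),
                  b = c(2, 4, 6, 8, 10),  # perfectly correlated with a
                  c = c(5, 3, 4, 1, 2))
m <- cor(toy)
diag(m) <- NA
top_cor <- apply(m, 2, max, na.rm = TRUE)
top_cor  # a and b reach 1; c peaks at -0.8
```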
We compute the Kaiser-Meyer-Olkin test.
# Kaiser factor adequacy
KMO(df2)
## Kaiser-Meyer-Olkin factor adequacy
## Call: KMO(r = df2)
## Overall MSA = 0.88
## MSA for each item =
## Im1 Im2 Im3 Im4 Im5 Im6 Im7 Im8 Im9 Im10 Im11 Im12 Im13 Im14 Im15 Im16
## 0.82 0.82 0.86 0.85 0.95 0.82 0.84 0.93 0.94 0.83 0.91 0.88 0.87 0.83 0.96 0.91
## Im17 Im18 Im19 Im20 Im21 Im22
## 0.86 0.86 0.94 0.83 0.91 0.88
The overall Measure of Sampling Adequacy is 0.88, so the variables seem adequate for factor analysis. According to Kaiser, an MSA in the .80s is "meritorious". The lowest MSA is achieved by Im2, but it is still quite high, so we should not worry too much. Moreover, the correlation grouping it formed with Im15 and Im1 had a strong common theme. Im9 and Im11 both have high MSA despite standing on their own in the correlation matrix, which is reassuring.
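For intuition, the overall KMO/MSA statistic can be sketched in base R as the ratio of squared correlations to squared correlations plus squared partial (anti-image) correlations. The helper and toy matrix below are our own illustration, not the psych implementation:

```r
# Overall KMO/MSA: sum(r^2) / (sum(r^2) + sum(partial r^2)),
# summed over off-diagonal entries.
kmo_overall <- function(R) {
  Rinv <- solve(R)
  # partial correlations: rescale the inverse by its diagonal
  P <- -Rinv / sqrt(outer(diag(Rinv), diag(Rinv)))
  diag(P) <- 0
  R0 <- R
  diag(R0) <- 0
  sum(R0^2) / (sum(R0^2) + sum(P^2))
}

# Toy matrix: three items sharing a common correlation of 0.5
R_toy <- matrix(0.5, nrow = 3, ncol = 3)
diag(R_toy) <- 1
kmo_overall(R_toy)  # about 0.69
```

Higher shared correlation relative to partial correlation pushes the statistic towards 1, which is why factorable data gets a high MSA.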
We plot a bar chart of the unique variance of each variable.
anti_mat_diag = data.frame("Question" = 1:22,
"UniqueVar" = diag(KMO(df2)$ImCov))
ggplot(data = anti_mat_diag, aes(x = Question, y = UniqueVar)) +
geom_col() +
theme_classic() +
scale_x_continuous(breaks = seq(1, 22, 1)) +
scale_y_continuous(limits = c(0,1)) +
labs(title = "Diagonal values of anti-image correlation matrix",
y = "Proportion of unique variance")
There do not seem to be any big outliers, but Im9 and Im11 do stand out. We saw how those questions were unique in terms of the matter addressed, so the questionnaire could certainly be improved by going deeper into the French fashion and French cosmetics subjects. This matters because, marketing-wise, these two subjects are strong selling points, especially for tourists who might come looking for something more authentic and identifiably Parisian.
We use Bartlett’s test of sphericity.
# Bartlett's Test of Sphericity
bart_spher(df2)
## Bartlett's Test of Sphericity
##
## Call: bart_spher(x = df2)
##
## X2 = 6451.238
## df = 231
## p-value < 2.22e-16
We are testing whether our sample of data stems from a population of uncorrelated variables, meaning the correlation matrix is an identity matrix (diagonal of 1s). This is the null hypothesis. Since we get a very small \(p\)-value under 0.05, the null hypothesis can be rejected, meaning that the correlation matrix is not an identity matrix and the variables are correlated.
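The statistic itself is straightforward to sketch in base R from the standard formula (the helper is our own, not the REdaS implementation):

```r
# Bartlett's sphericity: chi2 = -(n - 1 - (2p + 5)/6) * log(det(R)),
# with df = p(p - 1)/2, where R is the p x p correlation matrix.
bartlett_sphericity <- function(R, n) {
  p <- ncol(R)
  chi2 <- -(n - 1 - (2 * p + 5) / 6) * log(det(R))
  df <- p * (p - 1) / 2
  list(chi2 = chi2, df = df,
       p.value = pchisq(chi2, df, lower.tail = FALSE))
}

# Sanity check: 22 variables give df = 231, as in the output above, and
# an identity correlation matrix gives chi2 = 0 (p-value 1, no rejection).
bartlett_sphericity(diag(22), n = 385)
```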
We explore models with 1 to 10 factors, as adding more factors results in an error, and plot some key measures which assess how well each model represents the underlying structure of our dataset. For clarity, we describe the criteria used below:
fit, which measures how well the factor model reproduces the correlation matrix;
objective, which is the value of the function that is minimized by a maximum likelihood procedure;
crms, which is the root mean square of the off-diagonal residuals, corrected for degrees of freedom;
RMSEA, which is the root mean squared error of approximation;
TLI, which is the Tucker Lewis Index of factoring reliability;
BIC, which is the Bayesian information criterion.
# we initiate an empty dataframe which will record our criteria values
fit_df <- matrix(nrow = 10, ncol = 6)
colnames(fit_df) <- c("fit", "objective", "RMSEA", "crms", "TLI", "BIC")
fit_df <- as.data.frame(fit_df)
# we compute the factor analysis for nfactors
for (i in 1:10) {
FA <- fa(df2, nfactors = i, rotate = "varimax", fm = "pa")
fit_df[i, 1] <- FA$fit
fit_df[i, 2] <- FA$objective
fit_df[i, 3] <- FA$RMSEA[1]
fit_df[i, 4] <- FA$crms
fit_df[i, 5] <- FA$TLI
fit_df[i, 6] <- FA$BIC
}
fit_df %>% gather() %>%
ggplot(aes(x = rep(1:10, ncol(fit_df)), y = value)) +
facet_wrap(~ key, scales = "free") +
geom_point() +
theme_classic() +
scale_x_continuous(breaks = seq(1, 10, 1)) +
geom_vline(xintercept = 8, linetype = "dashed") +
geom_vline(xintercept = 9, linetype = "dashed") +
labs(title = "Various criteria for Principal Axis Factoring",
x = "Number of factors",
y = "")
rm(FA, fit_df, i)
It seems that the number of factors we want to try and fit is 8 or 9.
We start fitting our first models; of course, selecting our final model will be a process of trial and error.
We try models with 8 or 9 factors and with or without
Im9 and Im11.
# including all images
FA9 <- fa(df2, nfactors = 9, rotate = "varimax", fm = "pa")
FA8 <- fa(df2, nfactors = 8, rotate = "varimax", fm = "pa")
# without images 9 and 11
FA9_119 <- fa(df2[, -c(9, 11)], nfactors = 9, rotate = "varimax", fm = "pa")
FA8_119 <- fa(df2[, -c(9, 11)], nfactors = 8, rotate = "varimax", fm = "pa")
We print the loadings of these models as heatmaps:
par(mfrow = c(2,2))
corrplot(t(FA8$loadings),
tl.cex = 0.7,
title = "PAF with 8 factors \n Loadings",
mar = c(0, 1, 3, 0))
corrplot(t(FA8_119$loadings),
tl.cex = 0.7,
title = "PAF with 8 factors excluding im9 and im11 \n Loadings",
mar = c(0, 1, 3, 0))
corrplot(t(FA9$loadings),
tl.cex = 0.7,
title = "PAF with 9 factors \n Loadings",
mar = c(0, 1, 3, 0))
corrplot(t(FA9_119$loadings),
tl.cex = 0.7,
title = "PAF with 9 factors excluding im9 and im11 \n Loadings",
mar = c(0, 1, 3, 0))
When plotting the loadings of all the models we estimated, we notice
that for the two full models, Im9 has no strong loadings on
any factor. Im11 seems to be loaded on factor 4, which is
the construct related to luxury. As Im11 mentions high
quality cosmetics, this kind of products can be related to the luxury
milieu.
When looking at the factors, we see that both models with 9 factors have no strong loadings on factor 9, which indicates that models with 8 factors are probably better suited to our data.
We also notice that Im8, Im15 and Im19 seem problematic: they are partially loaded onto multiple factors, or not loaded strongly onto any single factor. We have to investigate these questions further. To do so, we look at their descriptions.
descriptions[c(8,15,19),]
## Variable
## 1: Im8
## 2: Im15
## 3: Im19
## Label
## 1: What do GLB represent from your point of view? Expertise in French Traditional Cuisine
## 2: What do GLB represent from your point of view? Professional Selection of Brands
## 3: What do GLB represent from your point of view? Professional Organization
Im8 assesses the "Expertise in French traditional cuisine". Thus, it loads both on the construct we called French Lifestyle and on Food.

Im15 assesses the "Professional selection of brands". Thus, it loads on both Product Assortment and Professionalism.

Im19 assesses whether Galeries Lafayette are a "Professional organisation". This falls under the Professionalism construct as well, but its loading is weaker overall.

We investigate what the loadings look like if we take out only Im9:
FA8_9 <- fa(df2[, -c(9)], nfactors = 8, rotate = "varimax", fm = "pa")
FA9_9 <- fa(df2[, -c(9)], nfactors = 9, rotate = "varimax", fm = "pa")
par(mfrow = c(2,1))
corrplot(t(FA8_9$loadings),
tl.cex = 0.7,
title = "Loadings of PAF with 8 factors excluding im9",
mar = c(0, 1, 3, 0))
corrplot(t(FA9_9$loadings),
tl.cex = 0.7,
title = "Loadings of PAF with 9 factors excluding im9",
mar = c(0, 1, 3, 0))
par(mfrow = c(1,1))
The loadings of Im11 do not improve, which makes sense conceptually because the two variables describe quite different things.
Again, the model with 9 factors has no strong loading on its last factor. Therefore, we continue the analysis for now with only the models which have 8 factors and exclude at least Im9, i.e. FA8_9 and FA8_119. All other models are dropped from further analysis.
rm(FA8, FA9, FA9_9, FA9_119)
We display the loadings of our two remaining models numerically, with a cutoff at 0.3.
print(FA8_9$loadings, cutoff = 0.3, sort = TRUE)
##
## Loadings:
## PA1 PA3 PA2 PA4 PA5 PA7 PA6 PA8
## Im3 0.812
## Im4 0.893
## Im5 0.650
## Im20 0.860
## Im21 0.728
## Im22 0.786
## Im8 0.599 0.520
## Im10 0.887
## Im14 0.823
## Im11 0.560
## Im12 0.878
## Im13 0.721
## Im1 0.870
## Im2 0.828
## Im6 0.816
## Im7 0.318 0.838
## Im17 0.824
## Im18 0.747
## Im16 0.765
## Im19 0.302 0.546
## Im15 0.467 0.391
##
## PA1 PA3 PA2 PA4 PA5 PA7 PA6 PA8
## SS loadings 2.471 2.346 2.213 2.150 2.101 1.928 1.634 1.354
## Proportion Var 0.118 0.112 0.105 0.102 0.100 0.092 0.078 0.064
## Cumulative Var 0.118 0.229 0.335 0.437 0.537 0.629 0.707 0.771
print(FA8_119$loadings, cutoff = 0.3, sort = TRUE)
##
## Loadings:
## PA4 PA3 PA2 PA1 PA6 PA5 PA7 PA8
## Im3 0.816
## Im4 0.894
## Im5 0.652
## Im20 0.865
## Im21 0.731
## Im22 0.787
## Im8 0.605 0.518
## Im10 0.889
## Im14 0.840
## Im1 0.866
## Im2 0.837
## Im6 0.802
## Im7 0.321 0.848
## Im12 0.747
## Im13 0.812
## Im17 0.811
## Im18 0.765
## Im16 0.781
## Im19 0.305 0.533
## Im15 0.468 0.384
##
## PA4 PA3 PA2 PA1 PA6 PA5 PA7 PA8
## SS loadings 2.487 2.360 2.239 2.109 1.913 1.649 1.640 1.350
## Proportion Var 0.124 0.118 0.112 0.105 0.096 0.082 0.082 0.067
## Cumulative Var 0.124 0.242 0.354 0.460 0.555 0.638 0.720 0.787
The two models are quite similar, but the loading associated with variable Im11 inside FA8_9 is under 0.6. Moreover, the proportion of variance explained by the model is 0.771 for FA8_9 and 0.787 for FA8_119. Thus, FA8_119 seems the better option.
The three variables mentioned before still load onto multiple factors:
- Im8 loads onto the Food and French Lifestyle constructs, as before;
- Im15 loads onto Product Assortment and Professionalism, as before;
- Im19 loads onto Store Arrangement and Professionalism;
- Im7, which is part of the French Lifestyle construct, also loads onto the Store Arrangement construct, but unlike the other three variables the difference in magnitude of the loadings is large: it maintains a strong attachment to its designated factor.
We look at the description of question Im7.
descriptions[7,]
## Variable Label
## 1: Im7 What do GLB represent from your point of view? French Savoir-vivre
There does not seem to be a particular logical connection between French Savoir-vivre and the Store Arrangement construct. However, as mentioned, this is not a major concern, as Im7 is strongly anchored to the French Lifestyle construct. Regarding Im8, Im15 and Im19, we will have to test models including or excluding them.
We display the scree plot for FA8_119.
# Scree Plot FA8_119
ggplot(mapping = aes(x = 1:length(FA8_119$values),
y = FA8_119$values,
color = (FA8_119$values >= 1))) +
geom_point() +
geom_hline(yintercept = 1, linetype = "dashed") +
theme_classic() +
labs(title = "Scree plot of FA8_119",
x = "Factor Number",
y = "Eigenvalue",
color = "Eigenvalue >= 1")
This plot serves to determine the number of factors to retain in the principal axis factoring. The factors' eigenvalues are ordered in descending order. This method is quite subjective, but it might provide further insight. The elbow rule suggests cutting off the number of factors at 8, which is coherent with what we have seen. However, the Kaiser-Guttman criterion, which selects only factors with eigenvalues greater than 1, would suggest a model with only five factors. Given the natural groupings we have considered above, this does not seem like an appropriate description of our data's structure. We will nonetheless explore this model in the following section.
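The Kaiser-Guttman rule itself takes one line of base R; a toy illustration (the matrix below is our own example, not the survey data):

```r
# Kaiser-Guttman rule: retain only factors whose eigenvalues are >= 1.
R_toy <- matrix(0.5, nrow = 4, ncol = 4)  # four items, common correlation 0.5
diag(R_toy) <- 1
ev <- eigen(R_toy, symmetric = TRUE)$values
sum(ev >= 1)  # here a single dominant eigenvalue survives the cutoff
```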
We run a model with only 5 factors and without Im9 and Im11, as the scree plot that suggested the 5-factor model was computed without those two items.
FA5_119 <- fa(df2[, -c(9,11)], nfactors = 5, rotate = "varimax", fm = "pa")
print(FA5_119$loadings, cutoff = 0.3, sort = TRUE)
##
## Loadings:
## PA2 PA4 PA5 PA1 PA3
## Im6 0.638
## Im7 0.757
## Im8 0.851
## Im10 0.774
## Im14 0.783
## Im3 0.828
## Im4 0.870
## Im5 0.635
## Im12 0.621
## Im13 0.712
## Im17 0.696
## Im18 0.623
## Im1 0.842
## Im2 0.834
## Im15 0.389 0.561
## Im20 0.792
## Im21 0.741
## Im22 0.806
## Im16 0.361 0.448
## Im19 0.378 0.349 0.415
##
## PA2 PA4 PA5 PA1 PA3
## SS loadings 3.356 2.582 2.564 2.514 2.293
## Proportion Var 0.168 0.129 0.128 0.126 0.115
## Cumulative Var 0.168 0.297 0.425 0.551 0.665
The cumulative explained variance of the model decreases. Moreover, factors such as French Lifestyle and Food are merged together, while questions Im16 and Im19 have loadings split between two or three factors. Clearly, this model is not appropriate for our data.
rm(FA8_9, FA5_119)
Thus, we drop this possible type of model to concentrate on the
analysis of variables Im8, Im15 and
Im19.
We remind the reader that in the FA8_119 model these three variables had split loadings; the factors they loaded onto made logical sense, but the loading values were quite close to the 0.3 cut-off.
We explore all combinations of models with 8 factors and with or without the three items of interest.
FA8_119_81519 <- fa(df2[, -c(9,11,8,15,19)], nfactors = 8, rotate = "varimax", fm = "pa")
FA8_119_815 <- fa(df2[, -c(9,11,8,15)], nfactors = 8, rotate = "varimax", fm = "pa")
FA8_119_819 <- fa(df2[, -c(9,11,8,19)], nfactors = 8, rotate = "varimax", fm = "pa")
FA8_119_1519 <- fa(df2[, -c(9,11,15,19)], nfactors = 8, rotate = "varimax", fm = "pa")
FA8_119_8 <- fa(df2[, -c(9,11,8)], nfactors = 8, rotate = "varimax", fm = "pa")
FA8_119_15 <- fa(df2[, -c(9,11,15)], nfactors = 8, rotate = "varimax", fm = "pa")
FA8_119_19 <- fa(df2[, -c(9,11,19)], nfactors = 8, rotate = "varimax", fm = "pa")
We plot the loadings and print them numerically for
FA8_119_8, FA8_119_15 and
FA8_119_19.
par(mfrow = c(3,1))
corrplot(t(FA8_119_8$loadings),
tl.cex = 0.7,
title = "Loadings of PAF with 8 factors excluding Images 9, 11 and 8",
mar = c(0, 1, 3, 0))
corrplot(t(FA8_119_15$loadings),
tl.cex = 0.7,
title = "Loadings of PAF with 8 factors excluding Images 9, 11 and 15",
mar = c(0, 1, 3, 0))
corrplot(t(FA8_119_19$loadings),
tl.cex = 0.7,
title = "Loadings of PAF with 8 factors excluding Images 9, 11 and 19",
mar = c(0, 1, 3, 0))
par(mfrow = c(1,1))
print(FA8_119_8$loadings, cutoff = 0.3, sort = TRUE)
##
## Loadings:
## PA4 PA3 PA1 PA2 PA6 PA5 PA7 PA8
## Im3 0.816
## Im4 0.895
## Im5 0.652
## Im20 0.864
## Im21 0.731
## Im22 0.788
## Im1 0.875
## Im2 0.827
## Im10 0.901
## Im14 0.807
## Im6 0.855
## Im7 0.308 0.808
## Im12 0.745
## Im13 0.817
## Im17 0.829
## Im18 0.749
## Im16 0.791
## Im19 0.305 0.542
## Im15 0.468 0.385
##
## PA4 PA3 PA1 PA2 PA6 PA5 PA7 PA8
## SS loadings 2.479 2.351 2.099 1.784 1.678 1.652 1.641 1.363
## Proportion Var 0.130 0.124 0.110 0.094 0.088 0.087 0.086 0.072
## Cumulative Var 0.130 0.254 0.365 0.459 0.547 0.634 0.720 0.792
print(FA8_119_15$loadings, cutoff = 0.3, sort = TRUE)
##
## Loadings:
## PA1 PA3 PA2 PA6 PA5 PA4 PA7 PA8
## Im3 0.819
## Im4 0.895
## Im5 0.653
## Im20 0.870
## Im21 0.730
## Im22 0.788
## Im8 0.608 0.516
## Im10 0.895
## Im14 0.834
## Im6 0.807
## Im7 0.324 0.845
## Im1 0.885
## Im2 0.803
## Im17 0.820
## Im18 0.760
## Im12 0.705
## Im13 0.861
## Im16 0.757
## Im19 0.306 0.553
##
## PA1 PA3 PA2 PA6 PA5 PA4 PA7 PA8
## SS loadings 2.465 2.315 2.235 1.890 1.808 1.623 1.593 1.179
## Proportion Var 0.130 0.122 0.118 0.099 0.095 0.085 0.084 0.062
## Cumulative Var 0.130 0.252 0.369 0.469 0.564 0.649 0.733 0.795
print(FA8_119_19$loadings, cutoff = 0.3, sort = TRUE)
##
## Loadings:
## PA4 PA3 PA2 PA1 PA6 PA5 PA7 PA8
## Im3 0.824
## Im4 0.902
## Im5 0.654
## Im20 0.864
## Im21 0.733
## Im22 0.788
## Im8 0.612 0.518
## Im10 0.886
## Im14 0.849
## Im1 0.872
## Im2 0.846
## Im6 0.804
## Im7 0.323 0.847
## Im12 0.740
## Im13 0.824
## Im17 0.808
## Im18 0.776
## Im16 0.309 0.589
## Im15 0.486 0.399
##
## PA4 PA3 PA2 PA1 PA6 PA5 PA7 PA8
## SS loadings 2.476 2.342 2.246 2.136 1.896 1.637 1.621 0.661
## Proportion Var 0.130 0.123 0.118 0.112 0.100 0.086 0.085 0.035
## Cumulative Var 0.130 0.254 0.372 0.484 0.584 0.670 0.755 0.790
All three models have a higher total explained variance than FA8_119. The biggest improvement comes from removing Im15. However, in all three models the two remaining variables of interest still load across several factors, with no clear attachment to a particular one. Thus, we will probably get more satisfying results with models excluding two of the variables.
We plot the loadings and print them numerically for
FA8_119_815, FA8_119_819 and
FA8_119_1519.
par(mfrow = c(3,1))
corrplot(t(FA8_119_815$loadings),
tl.cex = 0.7,
title = "Loadings of PAF with 8 factors excluding Images 9, 11, 8 and 15",
mar = c(0, 1, 3, 0))
corrplot(t(FA8_119_819$loadings),
tl.cex = 0.7,
title = "Loadings of PAF with 8 factors excluding Images 9, 11, 8 and 19",
mar = c(0, 1, 3, 0))
corrplot(t(FA8_119_1519$loadings),
tl.cex = 0.7,
title = "Loadings of PAF with 8 factors excluding Images 9, 11, 15 and 19",
mar = c(0, 1, 3, 0))
par(mfrow = c(1,1))
print(FA8_119_815$loadings, cutoff = 0.3, sort = TRUE)
##
## Loadings:
## PA1 PA3 PA5 PA2 PA6 PA4 PA7 PA8
## Im3 0.816
## Im4 0.894
## Im5 0.655
## Im20 0.869
## Im21 0.731
## Im22 0.790
## Im1 0.896
## Im2 0.791
## Im10 0.931
## Im14 0.781
## Im6 0.885
## Im7 0.317 0.782
## Im17 0.828
## Im18 0.751
## Im12 0.706
## Im13 0.862
## Im16 0.696
## Im19 0.622
##
## PA1 PA3 PA5 PA2 PA6 PA4 PA7 PA8
## SS loadings 2.439 2.306 1.797 1.789 1.667 1.611 1.594 1.193
## Proportion Var 0.135 0.128 0.100 0.099 0.093 0.089 0.089 0.066
## Cumulative Var 0.135 0.264 0.363 0.463 0.555 0.645 0.733 0.800
print(FA8_119_819$loadings, cutoff = 0.3, sort = TRUE)
##
## Loadings:
## PA4 PA3 PA1 PA2 PA6 PA5 PA7 PA8
## Im3 0.824
## Im4 0.903
## Im5 0.656
## Im20 0.864
## Im21 0.733
## Im22 0.789
## Im1 0.897
## Im2 0.821
## Im10 0.919
## Im14 0.799
## Im6 0.856
## Im7 0.308 0.808
## Im12 0.732
## Im13 0.835
## Im17 0.824
## Im18 0.761
## Im16 0.303 0.601
## Im15 0.481 0.408
##
## PA4 PA3 PA1 PA2 PA6 PA5 PA7 PA8
## SS loadings 2.470 2.333 2.120 1.789 1.669 1.644 1.621 0.666
## Proportion Var 0.137 0.130 0.118 0.099 0.093 0.091 0.090 0.037
## Cumulative Var 0.137 0.267 0.385 0.484 0.577 0.668 0.758 0.795
print(FA8_119_1519$loadings, cutoff = 0.3, sort = TRUE)
##
## Loadings:
## PA1 PA3 PA2 PA6 PA5 PA4 PA7 PA8
## Im3 0.828
## Im4 0.904
## Im5 0.656
## Im20 0.870
## Im21 0.732
## Im22 0.788
## Im8 0.614 0.520
## Im10 0.888
## Im14 0.847
## Im6 0.806
## Im7 0.324 0.846
## Im1 0.874
## Im2 0.832
## Im17 0.827
## Im18 0.763
## Im12 0.718
## Im13 0.851
## Im16 0.305 0.579
##
## PA1 PA3 PA2 PA6 PA5 PA4 PA7 PA8
## SS loadings 2.462 2.296 2.237 1.877 1.829 1.609 1.573 0.459
## Proportion Var 0.137 0.128 0.124 0.104 0.102 0.089 0.087 0.025
## Cumulative Var 0.137 0.264 0.389 0.493 0.594 0.684 0.771 0.797
Here, model FA8_119_815 outperforms the other two. Its explained variance is the highest we have seen yet, at 0.800, and the loadings are clear for all variables. This is the first instance of variable Im19 having a loading above the usual 0.6 cutoff, which is a good sign. The other two models still exhibit split loadings for the three problematic variables.
corrplot(t(FA8_119_81519$loadings),
tl.cex = 0.7,
title = "Loadings of PAF with 8 factors excluding Images 9, 11, 8, 15 and 19",
mar = c(0, 1, 3, 0))
print(FA8_119_81519$loadings, cutoff = 0.3, sort = TRUE)
##
## Loadings:
## PA1 PA3 PA5 PA2 PA6 PA4 PA7 PA8
## Im3 0.829
## Im4 0.906
## Im5 0.659
## Im20 0.871
## Im21 0.732
## Im22 0.789
## Im1 0.903
## Im2 0.807
## Im10 0.934
## Im14 0.789
## Im6 0.887
## Im7 0.320 0.780
## Im17 0.840
## Im18 0.752
## Im12 0.724
## Im13 0.848
## Im16 0.312 0.304 0.546
##
## PA1 PA3 PA5 PA2 PA6 PA4 PA7 PA8
## SS loadings 2.465 2.288 1.842 1.802 1.651 1.612 1.575 0.391
## Proportion Var 0.145 0.135 0.108 0.106 0.097 0.095 0.093 0.023
## Cumulative Var 0.145 0.280 0.388 0.494 0.591 0.686 0.779 0.801
When excluding all three variables, we see that Im16 no longer forms a factor cleanly on its own: its loading is split between three factors. This is because Im19 is essential to forming a factor together with Im16. The total explained variance is very slightly higher than before, but a difference of 0.001 is not a good trade-off for accepting split loadings in Im16.
As we have shown, the best model we have found is
FA8_119_815. It has an explained variance of 80% and all
factor loadings are clear and above 0.6. We remove all other models.
rm(FA8_119, FA8_119_8, FA8_119_15, FA8_119_19, FA8_119_819, FA8_119_1519, FA8_119_81519)
We display the loadings of our final model again for clarity.
print(FA8_119_815$loadings, cutoff = 0.3, sort = TRUE)
##
## Loadings:
## PA1 PA3 PA5 PA2 PA6 PA4 PA7 PA8
## Im3 0.816
## Im4 0.894
## Im5 0.655
## Im20 0.869
## Im21 0.731
## Im22 0.790
## Im1 0.896
## Im2 0.791
## Im10 0.931
## Im14 0.781
## Im6 0.885
## Im7 0.317 0.782
## Im17 0.828
## Im18 0.751
## Im12 0.706
## Im13 0.862
## Im16 0.696
## Im19 0.622
##
## PA1 PA3 PA5 PA2 PA6 PA4 PA7 PA8
## SS loadings 2.439 2.306 1.797 1.789 1.667 1.611 1.594 1.193
## Proportion Var 0.135 0.128 0.100 0.099 0.093 0.089 0.089 0.066
## Cumulative Var 0.135 0.264 0.363 0.463 0.555 0.645 0.733 0.800
Our final choice is FA8_119_815 and the 8 factors
identified are:
Factor 1: Organization and Arrangement
descriptions[c(3,4,5)]
## Variable
## 1: Im3
## 2: Im4
## 3: Im5
## Label
## 1: What do GLB represent from your point of view? Artistic Decoration of Sales Area
## 2: What do GLB represent from your point of view? Creative Decoration of Sales Area
## 3: What do GLB represent from your point of view? Appealing Arrangement of Shop Windows
Factor 2: Food
descriptions[c(10,14)]
## Variable
## 1: Im10
## 2: Im14
## Label
## 1: What do GLB represent from your point of view? Gourmet Food
## 2: What do GLB represent from your point of view? Gourmet specialities
Factor 3: Shopping experience
descriptions[c(20,21,22)]
## Variable
## 1: Im20
## 2: Im21
## 3: Im22
## Label
## 1: What do GLB represent from your point of view? Relaxing Shopping
## 2: What do GLB represent from your point of view? A Great Place to Stroll
## 3: What do GLB represent from your point of view? Intimate Shop Atmosphere
Factor 4: “Coolness” factor of the Galerie
descriptions[c(17,18)]
## Variable Label
## 1: Im17 What do GLB represent from your point of view? Are Trendy
## 2: Im18 What do GLB represent from your point of view? Are Hip
Factor 5: Assortment
descriptions[c(1,2)]
## Variable Label
## 1: Im1 What do GLB represent from your point of view? Large Assortment
## 2: Im2 What do GLB represent from your point of view? Assortment Variety
Factor 6: French Lifestyle
descriptions[c(6,7)]
## Variable Label
## 1: Im6 What do GLB represent from your point of view? France
## 2: Im7 What do GLB represent from your point of view? French Savoir-vivre
Factor 7: Luxury and Designer brands’ presence
descriptions[c(12,13)]
## Variable
## 1: Im12
## 2: Im13
## Label
## 1: What do GLB represent from your point of view? Luxury brands
## 2: What do GLB represent from your point of view? Up tp date Designer Brands
Factor 8: Professional appearance
descriptions[c(16,19)]
## Variable
## 1: Im16
## 2: Im19
## Label
## 1: What do GLB represent from your point of view? Professional Appearance Towards Customers
## 2: What do GLB represent from your point of view? Professional Organization
We can rename our eight factors inside the table which describes the total variance and the total explained variance.
colnames(FA8_119_815$Vaccounted) <- c('Organization', 'Experience', 'Assortment', 'Food',
'France', 'Coolness', 'Luxury', 'Professionalism')
FA8_119_815$Vaccounted
## Organization Experience Assortment Food France
## SS loadings 2.4389103 2.3062541 1.7966394 1.78872316 1.66690500
## Proportion Var 0.1354950 0.1281252 0.0998133 0.09937351 0.09260583
## Cumulative Var 0.1354950 0.2636202 0.3634335 0.46280705 0.55541288
## Proportion Explained 0.1694282 0.1602127 0.1248104 0.12426045 0.11579789
## Cumulative Proportion 0.1694282 0.3296409 0.4544512 0.57871168 0.69450957
## Coolness Luxury Professionalism
## SS loadings 1.61093847 1.5937435 1.19283822
## Proportion Var 0.08949658 0.0885413 0.06626879
## Cumulative Var 0.64490946 0.7334508 0.79971956
## Proportion Explained 0.11190996 0.1107154 0.08286504
## Cumulative Proportion 0.80641952 0.9171350 1.00000000
This allows us to quickly identify the most important factors. To facilitate visualisation, we plot a Pareto chart of the total variance explained (TVE) below.
ggplot(mapping = aes(x = reorder(colnames(FA8_119_815$Vaccounted), -FA8_119_815$Vaccounted[4,]),
y = FA8_119_815$Vaccounted[4,])) +
geom_col() +
geom_line(aes(x = 1:8, y = FA8_119_815$Vaccounted[5,]), color = "red") +
geom_point(aes(x = 1:8, y = FA8_119_815$Vaccounted[5,])) +
theme_classic() +
labs(x = "Factor",
y = "Proportion of TVE",
title = "Pareto chart of TVE for FA8_119_815")
Organisation and Experience are the two most important factors in our model; they are the two constructs that people most identify with Galeries Lafayette. After Experience there is a small gap, and then the factors decrease very slowly in terms of their share of TVE, except for Professionalism at the end, which is a bit lower. This means that the way the Galeries Lafayette displays are organized, and the atmosphere giving customers a unique experience, are the two most important factors for the Galeries.
The "outcomes" for our models will be Repurchase Intention and Cocreation Intention. The "mediators" will be Customer Satisfaction and Affective Commitment.
CFA is used to test whether the designed construct (in our case FA8_119_815) is actually appropriate for modelling the factors. In the exploratory part, we developed a hypothesis of which latent factors are described by the observable variables (the Images).
We state the model:
In lavaan syntax, =~ means "is measured by". What we are designing here is a measurement model, because the latent variables predict the observed variables. By default, lavaan allows the factors to correlate with each other. To prevent that, the syntax factor1 ~~ 0*factor2 fixes the correlation between two factors to zero. This choice depends on the data we have; it is an option to consider if our models fit poorly.
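As a hedged sketch (an orthogonal variant we do not use in the final model), this is how the covariance between two of our factors could be fixed to zero in lavaan syntax:

```r
# Hypothetical orthogonal variant of part of the model: the Food-Luxury
# factor covariance is constrained to zero with the ~~ 0* syntax.
construct_orth <- "
  Food   =~ Im10 + Im14
  Luxury =~ Im12 + Im13
  Food ~~ 0*Luxury
"
# fit_orth <- lavaan::cfa(construct_orth, data = df1, missing = 'ML')
```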
We begin by creating a model which includes only the eight factors we created from the Images variables.
construct <- "
Organization =~ Im3 + Im4 + Im5
Food =~ Im10 + Im14
Shop_experience =~ Im20 + Im21 + Im22
Coolness =~ Im17 + Im18
Assortment =~ Im1 + Im2
French_lifestyle =~ Im6 + Im7
Luxury =~ Im12 + Im13
Professionalism =~ Im16 + Im19
"
fit <- lavaan::cfa(construct, data = df1, missing = "ML")
summary(fit, fit.measures = TRUE, estimates = FALSE)
## lavaan 0.6.15 ended normally after 104 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 82
##
## Number of observations 553
## Number of missing patterns 75
##
## Model Test User Model:
##
## Test statistic 203.508
## Degrees of freedom 107
## P-value (Chi-square) 0.000
##
## Model Test Baseline Model:
##
## Test statistic 7217.692
## Degrees of freedom 153
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.986
## Tucker-Lewis Index (TLI) 0.980
##
## Robust Comparative Fit Index (CFI) 0.986
## Robust Tucker-Lewis Index (TLI) 0.980
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -12234.854
## Loglikelihood unrestricted model (H1) -12133.100
##
## Akaike (AIC) 24633.709
## Bayesian (BIC) 24987.568
## Sample-size adjusted Bayesian (SABIC) 24727.264
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.040
## 90 Percent confidence interval - lower 0.032
## 90 Percent confidence interval - upper 0.049
## P-value H_0: RMSEA <= 0.050 0.971
## P-value H_0: RMSEA >= 0.080 0.000
##
## Robust RMSEA 0.041
## 90 Percent confidence interval - lower 0.033
## 90 Percent confidence interval - upper 0.050
## P-value H_0: Robust RMSEA <= 0.050 0.946
## P-value H_0: Robust RMSEA >= 0.080 0.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.024
We analyse the output.
The first measure of model fit is the result of the \(\chi^2\) test performed by the model. The model we created reproduces the correlation matrix between the variables of our dataset, i.e. the Image variables in our case. We call \(\Sigma\) the correlation matrix reproduced by the model and \(S\) the real correlation matrix of our dataset. The \(\chi^2\) test then makes the following null hypothesis: \[H_0 : \Sigma = S\] If the \(\chi^2\) test's null hypothesis is rejected, it indicates that the model might not fit the structure underlying our data well, as the reproduced correlation matrix is too different from the original one. Here, as the \(p\)-value of the \(\chi^2\) test is 0.00, we should reject the null hypothesis, which indicates that the model does not fit the structure of our data well.
However, there is one important caveat: the \(\chi^2\) test is extremely sensitive to sample size. When the sample is small, the test tends to be too lenient and fails to reject ill-fitting models. Conversely, when the sample is large, it becomes too strict and can reject models which are perfectly appropriate. As we have a large sample, the \(\chi^2\) test might be too harsh on our model.
Instead, we can observe another measure related to this statistical test. If the ratio between the value of the \(\chi^2\) test and the degrees of freedom is below 5, this indicates that the model could be a good fit.
We compute the ratio:
# value of test
fit@Fit@test$standard$stat
## [1] 203.508
# degrees of freedom
fit@Fit@test$standard$df
## [1] 107
# ratio
fit@Fit@test$standard$stat / fit@Fit@test$standard$df
## [1] 1.901944
As the ratio is well below 5, this speaks in favour of our model’s fit.
The second measure of fit is the RMSEA (Root Mean Square Error of Approximation). It is defined as
\[RMSEA = \sqrt{\dfrac{\chi^2 - df}{df(N-1)}}\] where \(\chi^2\) is the value of the \(\chi^2\) test statistic, \(df\) is the number of degrees of freedom and \(N\) is the sample size of the data, which is 553 in our case.
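Plugging the reported values into this formula reproduces the RMSEA shown in the summary (a quick arithmetic check using the statistics from the output above):

```r
# Recompute the RMSEA by hand from the reported chi-square statistic
chisq <- 203.508  # test statistic of the user model
df    <- 107      # degrees of freedom
N     <- 553      # sample size
rmsea <- sqrt((chisq - df) / (df * (N - 1)))
rmsea  # ~0.040, matching the RMSEA in the summary
```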
As we can see in the summary, we have an RMSEA of 0.04. If the RMSEA is below 0.05, the model is considered to be a good fit. If it is between 0.05 and 0.08, it is acceptable. The test of good fit tests the following statistical hypothesis:
\[H_0: RMSEA \leq 0.05\] In our case, the \(p\)-value of the test of good fit is 0.971, which means that the null hypothesis cannot be rejected. Thus, the RMSEA indicates that our model is a good fit.
The third measure of fit is the CFI (Comparative Fit Index). The CFI compares the model to a baseline model, usually one with no correlation between the variables, and quantifies the improvement in fit gained by using our model instead of the baseline. A CFI value above 0.95 indicates a good fit, so our model’s CFI of 0.98 is a good sign.
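As a sanity check, the CFI can be recomputed from the user-model and baseline-model \(\chi^2\) statistics reported above, using the standard CFI formula:

```r
# CFI = 1 - max(chisq_m - df_m, 0) / max(chisq_b - df_b, chisq_m - df_m, 0)
chisq_m <- 203.508;  df_m <- 107   # user model
chisq_b <- 7217.692; df_b <- 153   # baseline model
cfi <- 1 - max(chisq_m - df_m, 0) / max(chisq_b - df_b, chisq_m - df_m, 0)
cfi  # ~0.986, matching the CFI in the summary
```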
Now that we have examined the three main fit measures, we take a look at the estimated coefficients of the model.
summary(fit, standardized = TRUE)
## lavaan 0.6.15 ended normally after 104 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 82
##
## Number of observations 553
## Number of missing patterns 75
##
## Model Test User Model:
##
## Test statistic 203.508
## Degrees of freedom 107
## P-value (Chi-square) 0.000
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Observed
## Observed information based on Hessian
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Organization =~
## Im3 1.000 1.236 0.937
## Im4 1.056 0.025 42.718 0.000 1.305 0.969
## Im5 0.818 0.034 23.812 0.000 1.011 0.760
## Food =~
## Im10 1.000 0.811 0.922
## Im14 1.018 0.036 28.129 0.000 0.825 0.954
## Shop_experience =~
## Im20 1.000 1.264 0.845
## Im21 0.849 0.041 20.811 0.000 1.073 0.783
## Im22 1.061 0.047 22.572 0.000 1.341 0.877
## Coolness =~
## Im17 1.000 1.208 0.971
## Im18 0.989 0.041 24.254 0.000 1.194 0.854
## Assortment =~
## Im1 1.000 1.309 0.983
## Im2 0.880 0.033 26.989 0.000 1.152 0.896
## French_lifestyle =~
## Im6 1.000 0.975 0.813
## Im7 1.185 0.070 16.849 0.000 1.155 0.955
## Luxury =~
## Im12 1.000 0.925 0.814
## Im13 1.197 0.068 17.500 0.000 1.108 0.919
## Professionalism =~
## Im16 1.000 0.921 0.766
## Im19 1.046 0.061 17.135 0.000 0.964 0.857
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Organization ~~
## Food 0.417 0.050 8.384 0.000 0.416 0.416
## Shop_experienc 0.729 0.082 8.912 0.000 0.467 0.467
## Coolness 0.770 0.076 10.133 0.000 0.516 0.516
## Assortment 0.711 0.079 9.035 0.000 0.440 0.440
## French_lifstyl 0.402 0.063 6.355 0.000 0.334 0.334
## Luxury 0.529 0.063 8.358 0.000 0.463 0.463
## Professionalsm 0.743 0.071 10.458 0.000 0.653 0.653
## Food ~~
## Shop_experienc 0.302 0.051 5.941 0.000 0.295 0.295
## Coolness 0.318 0.047 6.810 0.000 0.325 0.325
## Assortment 0.327 0.050 6.578 0.000 0.309 0.309
## French_lifstyl 0.463 0.047 9.822 0.000 0.585 0.585
## Luxury 0.310 0.042 7.391 0.000 0.413 0.413
## Professionalsm 0.371 0.043 8.565 0.000 0.497 0.497
## Shop_experience ~~
## Coolness 0.786 0.081 9.708 0.000 0.515 0.515
## Assortment 0.741 0.085 8.745 0.000 0.448 0.448
## French_lifstyl 0.410 0.065 6.356 0.000 0.333 0.333
## Luxury 0.475 0.065 7.311 0.000 0.407 0.407
## Professionalsm 0.556 0.069 8.084 0.000 0.478 0.478
## Coolness ~~
## Assortment 0.817 0.079 10.366 0.000 0.517 0.517
## French_lifstyl 0.378 0.061 6.179 0.000 0.321 0.321
## Luxury 0.646 0.064 10.050 0.000 0.579 0.579
## Professionalsm 0.668 0.066 10.050 0.000 0.600 0.600
## Assortment ~~
## French_lifstyl 0.286 0.061 4.723 0.000 0.224 0.224
## Luxury 0.592 0.066 8.971 0.000 0.489 0.489
## Professionalsm 0.717 0.072 9.945 0.000 0.595 0.595
## French_lifestyle ~~
## Luxury 0.256 0.048 5.378 0.000 0.284 0.284
## Professionalsm 0.328 0.051 6.441 0.000 0.366 0.366
## Luxury ~~
## Professionalsm 0.441 0.054 8.215 0.000 0.517 0.517
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Im3 4.995 0.056 88.561 0.000 4.995 3.786
## .Im4 4.999 0.057 86.984 0.000 4.999 3.712
## .Im5 5.035 0.057 87.844 0.000 5.035 3.787
## .Im10 6.100 0.037 162.776 0.000 6.100 6.936
## .Im14 6.138 0.037 165.836 0.000 6.138 7.092
## .Im20 4.672 0.064 73.175 0.000 4.672 3.123
## .Im21 5.139 0.058 87.969 0.000 5.139 3.751
## .Im22 4.279 0.065 65.401 0.000 4.279 2.799
## .Im17 5.025 0.053 94.523 0.000 5.025 4.041
## .Im18 4.595 0.060 76.454 0.000 4.595 3.287
## .Im1 4.791 0.057 84.201 0.000 4.791 3.597
## .Im2 4.857 0.055 88.354 0.000 4.857 3.779
## .Im6 5.827 0.051 113.785 0.000 5.827 4.858
## .Im7 5.753 0.052 110.824 0.000 5.753 4.756
## .Im12 5.665 0.049 116.049 0.000 5.665 4.986
## .Im13 5.448 0.052 105.546 0.000 5.448 4.521
## .Im16 5.135 0.052 99.142 0.000 5.135 4.269
## .Im19 5.145 0.048 106.947 0.000 5.145 4.574
## Organization 0.000 0.000 0.000
## Food 0.000 0.000 0.000
## Shop_experienc 0.000 0.000 0.000
## Coolness 0.000 0.000 0.000
## Assortment 0.000 0.000 0.000
## French_lifstyl 0.000 0.000 0.000
## Luxury 0.000 0.000 0.000
## Professionalsm 0.000 0.000 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Im3 0.213 0.024 8.752 0.000 0.213 0.122
## .Im4 0.109 0.024 4.527 0.000 0.109 0.060
## .Im5 0.747 0.049 15.217 0.000 0.747 0.422
## .Im10 0.116 0.020 5.959 0.000 0.116 0.150
## .Im14 0.068 0.019 3.503 0.000 0.068 0.090
## .Im20 0.640 0.061 10.459 0.000 0.640 0.286
## .Im21 0.726 0.057 12.661 0.000 0.726 0.386
## .Im22 0.539 0.063 8.489 0.000 0.539 0.231
## .Im17 0.088 0.045 1.957 0.050 0.088 0.057
## .Im18 0.528 0.054 9.725 0.000 0.528 0.270
## .Im1 0.060 0.051 1.183 0.237 0.060 0.034
## .Im2 0.325 0.044 7.411 0.000 0.325 0.197
## .Im6 0.488 0.056 8.724 0.000 0.488 0.339
## .Im7 0.128 0.066 1.939 0.053 0.128 0.088
## .Im12 0.435 0.047 9.159 0.000 0.435 0.337
## .Im13 0.226 0.058 3.892 0.000 0.226 0.155
## .Im16 0.599 0.052 11.487 0.000 0.599 0.414
## .Im19 0.337 0.045 7.424 0.000 0.337 0.266
## Organization 1.527 0.107 14.325 0.000 1.000 1.000
## Food 0.657 0.050 13.263 0.000 1.000 1.000
## Shop_experienc 1.597 0.138 11.609 0.000 1.000 1.000
## Coolness 1.459 0.104 14.068 0.000 1.000 1.000
## Assortment 1.714 0.119 14.453 0.000 1.000 1.000
## French_lifstyl 0.951 0.094 10.075 0.000 1.000 1.000
## Luxury 0.856 0.084 10.186 0.000 1.000 1.000
## Professionalsm 0.848 0.088 9.629 0.000 1.000 1.000
We see that the first item loading of each factor (under Latent Variables) is fixed to 1. This is because lavaan uses the marker method for identification by default, which is also why no standard error, z-statistic or p-value is estimated for the first loading. By fixing that loading to 1 our model is at least identified (in our case, \(df=107\), so it is over-identified).
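The over-identification can be verified by counting sample moments against free parameters (a quick sketch using the numbers reported in the summary: 18 observed items, a mean structure because of missing = "ML", and 82 free parameters):

```r
# Degrees of freedom = unique sample moments - free parameters
p <- 18                          # number of observed items in the model
moments <- p * (p + 1) / 2 + p   # covariances plus means = 189
moments - 82                     # 107, the df reported in the summary
```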
Moreover, fixing the first loading to 1 scales the remaining loadings (and the variances) to the scale of the fixed item. Because our latent variables are not observed, their measurement units must be set somehow; what the marker method does is pass the first indicator’s metric to the latent variable. In that sense, if we consider the Assortment factor, a 1-unit increase in the latent factor (expressed on the scale of Im1) increases Im2 by 0.88.
We also see that the factor variances are standardized to 1 (in the Std.lv and Std.all columns), and that the residuals of the Images (recognizable by the leading dot . in their names) have an estimated variance, because, as stated before, the items are predicted by the latent factors. (A leading . in front of a variable name under the Intercepts output would likewise signal that the variable is endogenous.)
The standardized loadings are all above 0.6 (and, being on a common scale, comparable), meaning that the Images in each factor are good indicators of their latent variable. Those values, when squared, represent the proportion of the item’s variance that is explained by the construct.
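For example, taking Im2 from the Assortment factor, the squared standardized loading should match one minus the standardized residual variance (values read from the summary above):

```r
loading_im2 <- 0.896  # Std.all loading of Im2 on Assortment
resid_im2   <- 0.197  # Std.all residual variance of .Im2
loading_im2^2  # ~0.80 of Im2's variance explained by the construct
1 - resid_im2  # ~0.80, consistent with the squared loading
```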
We plot our model:
lavaanPlot(name="plot_factors", fit, labels = NULL)
It is the latent factors that influence the observed items, not the other way around.
The function modindices quantifies the improvement in model fit we would obtain if certain fixed elements were allowed to be freely estimated.
# minimum.value = 10 allows to only display the quantified improvement which is >= 10
modindices(fit, minimum.value = 10, sort. = TRUE)
## lhs op rhs mi epc sepc.lv sepc.all sepc.nox
## 312 Im21 ~~ Im22 15.754 -0.292 -0.292 -0.466 -0.466
## 166 Assortment =~ Im20 14.627 -0.149 -0.195 -0.131 -0.131
## 300 Im20 ~~ Im21 11.943 0.233 0.233 0.342 0.342
## 126 Food =~ Im12 11.344 0.186 0.151 0.133 0.133
## 127 Food =~ Im13 11.344 -0.222 -0.180 -0.150 -0.150
The values of the modification indices, under the column mi, are not very high, which is reassuring. Still, Images 21 and 22 have the largest modification index, meaning they share variance that is not explained through their construct, so we try adding Im21 ~~ Im22 to the model to see whether it brings an improvement.
construct2 <- "
Organization =~ Im3 + Im4 + Im5
Food =~ Im10 + Im14
Shop_experience =~ Im20 + Im21 + Im22
Coolness =~ Im17 + Im18
Assortment =~ Im1 + Im2
French_lifestyle =~ Im6 + Im7
Luxury =~ Im12 + Im13
Professionalism =~ Im16 + Im19
Im21 ~~ Im22
"
fit2 <- cfa(construct2, data = df1, missing = "ML")
summary(fit2, fit.measures = TRUE, standardized = TRUE)
## lavaan 0.6.15 ended normally after 110 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 83
##
## Number of observations 553
## Number of missing patterns 75
##
## Model Test User Model:
##
## Test statistic 184.164
## Degrees of freedom 106
## P-value (Chi-square) 0.000
##
## Model Test Baseline Model:
##
## Test statistic 7217.692
## Degrees of freedom 153
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.989
## Tucker-Lewis Index (TLI) 0.984
##
## Robust Comparative Fit Index (CFI) 0.989
## Robust Tucker-Lewis Index (TLI) 0.984
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -12225.182
## Loglikelihood unrestricted model (H1) -12133.100
##
## Akaike (AIC) 24616.365
## Bayesian (BIC) 24974.540
## Sample-size adjusted Bayesian (SABIC) 24711.061
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.037
## 90 Percent confidence interval - lower 0.028
## 90 Percent confidence interval - upper 0.045
## P-value H_0: RMSEA <= 0.050 0.995
## P-value H_0: RMSEA >= 0.080 0.000
##
## Robust RMSEA 0.038
## 90 Percent confidence interval - lower 0.028
## 90 Percent confidence interval - upper 0.047
## P-value H_0: Robust RMSEA <= 0.050 0.990
## P-value H_0: Robust RMSEA >= 0.080 0.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.024
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Observed
## Observed information based on Hessian
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Organization =~
## Im3 1.000 1.237 0.937
## Im4 1.055 0.025 42.735 0.000 1.305 0.969
## Im5 0.817 0.034 23.815 0.000 1.011 0.760
## Food =~
## Im10 1.000 0.811 0.922
## Im14 1.018 0.036 28.131 0.000 0.825 0.954
## Shop_experience =~
## Im20 1.000 1.154 0.771
## Im21 1.037 0.073 14.225 0.000 1.197 0.873
## Im22 1.287 0.086 14.952 0.000 1.485 0.971
## Coolness =~
## Im17 1.000 1.209 0.973
## Im18 0.986 0.040 24.370 0.000 1.192 0.853
## Assortment =~
## Im1 1.000 1.312 0.985
## Im2 0.876 0.032 27.281 0.000 1.149 0.894
## French_lifestyle =~
## Im6 1.000 0.977 0.814
## Im7 1.181 0.069 17.031 0.000 1.153 0.954
## Luxury =~
## Im12 1.000 0.926 0.815
## Im13 1.195 0.068 17.488 0.000 1.106 0.918
## Professionalism =~
## Im16 1.000 0.921 0.765
## Im19 1.047 0.061 17.112 0.000 0.964 0.857
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Im21 ~~
## .Im22 -0.377 0.098 -3.834 0.000 -0.377 -1.539
## Organization ~~
## Food 0.418 0.050 8.386 0.000 0.416 0.416
## Shop_experienc 0.627 0.081 7.748 0.000 0.440 0.440
## Coolness 0.770 0.076 10.128 0.000 0.515 0.515
## Assortment 0.712 0.079 9.037 0.000 0.439 0.439
## French_lifstyl 0.404 0.063 6.377 0.000 0.334 0.334
## Luxury 0.530 0.063 8.368 0.000 0.463 0.463
## Professionalsm 0.743 0.071 10.453 0.000 0.653 0.653
## Food ~~
## Shop_experienc 0.255 0.046 5.498 0.000 0.273 0.273
## Coolness 0.318 0.047 6.797 0.000 0.324 0.324
## Assortment 0.327 0.050 6.561 0.000 0.307 0.307
## French_lifstyl 0.464 0.047 9.859 0.000 0.585 0.585
## Luxury 0.310 0.042 7.397 0.000 0.413 0.413
## Professionalsm 0.371 0.043 8.563 0.000 0.497 0.497
## Shop_experience ~~
## Coolness 0.680 0.082 8.318 0.000 0.487 0.487
## Assortment 0.665 0.082 8.114 0.000 0.439 0.439
## French_lifstyl 0.363 0.059 6.112 0.000 0.322 0.322
## Luxury 0.368 0.063 5.826 0.000 0.345 0.345
## Professionalsm 0.464 0.067 6.922 0.000 0.437 0.437
## Coolness ~~
## Assortment 0.818 0.079 10.366 0.000 0.515 0.515
## French_lifstyl 0.379 0.061 6.192 0.000 0.321 0.321
## Luxury 0.647 0.064 10.059 0.000 0.578 0.578
## Professionalsm 0.667 0.066 10.046 0.000 0.599 0.599
## Assortment ~~
## French_lifstyl 0.285 0.061 4.704 0.000 0.223 0.223
## Luxury 0.594 0.066 8.994 0.000 0.489 0.489
## Professionalsm 0.716 0.072 9.931 0.000 0.593 0.593
## French_lifestyle ~~
## Luxury 0.256 0.048 5.378 0.000 0.283 0.283
## Professionalsm 0.329 0.051 6.455 0.000 0.366 0.366
## Luxury ~~
## Professionalsm 0.441 0.054 8.219 0.000 0.517 0.517
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Im3 4.995 0.056 88.561 0.000 4.995 3.786
## .Im4 4.998 0.057 86.987 0.000 4.998 3.712
## .Im5 5.035 0.057 87.841 0.000 5.035 3.787
## .Im10 6.100 0.037 162.772 0.000 6.100 6.936
## .Im14 6.138 0.037 165.832 0.000 6.138 7.092
## .Im20 4.670 0.064 73.134 0.000 4.670 3.121
## .Im21 5.140 0.058 87.967 0.000 5.140 3.750
## .Im22 4.279 0.065 65.392 0.000 4.279 2.798
## .Im17 5.025 0.053 94.545 0.000 5.025 4.042
## .Im18 4.594 0.060 76.448 0.000 4.594 3.287
## .Im1 4.790 0.057 84.213 0.000 4.790 3.597
## .Im2 4.856 0.055 88.334 0.000 4.856 3.778
## .Im6 5.827 0.051 113.785 0.000 5.827 4.858
## .Im7 5.753 0.052 110.849 0.000 5.753 4.757
## .Im12 5.665 0.049 116.071 0.000 5.665 4.987
## .Im13 5.448 0.052 105.564 0.000 5.448 4.521
## .Im16 5.135 0.052 99.132 0.000 5.135 4.268
## .Im19 5.145 0.048 106.944 0.000 5.145 4.574
## Organization 0.000 0.000 0.000
## Food 0.000 0.000 0.000
## Shop_experienc 0.000 0.000 0.000
## Coolness 0.000 0.000 0.000
## Assortment 0.000 0.000 0.000
## French_lifstyl 0.000 0.000 0.000
## Luxury 0.000 0.000 0.000
## Professionalsm 0.000 0.000 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Im3 0.212 0.024 8.691 0.000 0.212 0.122
## .Im4 0.111 0.024 4.605 0.000 0.111 0.061
## .Im5 0.747 0.049 15.217 0.000 0.747 0.422
## .Im10 0.116 0.020 5.950 0.000 0.116 0.150
## .Im14 0.068 0.019 3.513 0.000 0.068 0.091
## .Im20 0.907 0.091 10.013 0.000 0.907 0.405
## .Im21 0.445 0.095 4.702 0.000 0.445 0.237
## .Im22 0.135 0.129 1.047 0.295 0.135 0.058
## .Im17 0.084 0.044 1.879 0.060 0.084 0.054
## .Im18 0.533 0.054 9.853 0.000 0.533 0.273
## .Im1 0.051 0.050 1.029 0.303 0.051 0.029
## .Im2 0.332 0.043 7.669 0.000 0.332 0.201
## .Im6 0.485 0.055 8.752 0.000 0.485 0.337
## .Im7 0.133 0.065 2.036 0.042 0.133 0.091
## .Im12 0.433 0.048 9.115 0.000 0.433 0.336
## .Im13 0.228 0.058 3.928 0.000 0.228 0.157
## .Im16 0.600 0.052 11.494 0.000 0.600 0.415
## .Im19 0.336 0.045 7.386 0.000 0.336 0.265
## Organization 1.529 0.107 14.338 0.000 1.000 1.000
## Food 0.657 0.050 13.265 0.000 1.000 1.000
## Shop_experienc 1.332 0.143 9.336 0.000 1.000 1.000
## Coolness 1.462 0.103 14.130 0.000 1.000 1.000
## Assortment 1.722 0.118 14.564 0.000 1.000 1.000
## French_lifstyl 0.954 0.094 10.130 0.000 1.000 1.000
## Luxury 0.857 0.084 10.194 0.000 1.000 1.000
## Professionalsm 0.847 0.088 9.617 0.000 1.000 1.000
The model we obtain is very similar to our previous one, with only the loadings of the Shopping Experience construct changing. Overall, there is no significant improvement in the measures of fit, so we keep our model as it is.
We also tried allowing other elements of our model to correlate freely, but very little changed in terms of performance metrics, so we do not show those models here for the sake of concision.
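Since the two models are nested, a formal comparison is given by the \(\chi^2\) difference test, which lavaan provides via anova(fit, fit2). A quick sketch using the statistics reported in the two summaries (bearing in mind that, like the \(\chi^2\) test itself, this test is sensitive to sample size):

```r
# Chi-square difference (likelihood-ratio) test between the nested models
delta_chisq <- 203.508 - 184.164  # 19.344
delta_df    <- 107 - 106          # 1
pchisq(delta_chisq, df = delta_df, lower.tail = FALSE)
# The difference is statistically significant, yet CFI, TLI, RMSEA and
# SRMR barely move, which is consistent with keeping the simpler model
```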
From here on, Customer Satisfaction will be coded as SAT
and Affective Commitment as COM.
We proceed with a stepwise construction of the model, which is now structural, since we introduce relationships between the latent variables. The paths between latent variables (the factors and the mediators COM and SAT) are modeled as regression paths.
construct_m <- "
# MEASUREMENT MODELS
# Factors
Organization =~ Im3 + Im4 + Im5
Food =~ Im10 + Im14
Shop_experience =~ Im20 + Im21 + Im22
Coolness =~ Im17 + Im18
Assortment =~ Im1 + Im2
French_lifestyle =~ Im6 + Im7
Luxury =~ Im12 + Im13
Professionalism =~ Im16 + Im19
# mediators
COM =~ COM_A1 + COM_A2 + COM_A3 + COM_A4
SAT =~ SAT_1 + SAT_2 + SAT_3
# STRUCTURAL MODELS
# Paths from factors to mediators (latent to latent)
COM ~ a*Assortment + b*French_lifestyle + c*Shop_experience + d*Organization + e*Luxury + f*Coolness + g*Food + h*Professionalism
SAT ~ i*Assortment + l*French_lifestyle + m*Shop_experience + n*Organization + o*Luxury + p*Coolness + q*Food + r*Professionalism
# Total effects for mediators
# to COM
A_COM := a
Fr_COM := b
Sh_COM := c
Org_COM := d
Lux_COM := e
Cool_COM := f
Food_COM := g
Prof_COM := h
# to SAT
A_SAT := i
Fr_SAT := l
Sh_SAT := m
Org_SAT := n
Lux_SAT := o
Cool_SAT := p
Food_SAT := q
Prof_SAT := r
"
fitm <- cfa(construct_m, data = df1, missing = "ML")
summary(fitm, fit.measures = TRUE, standardized = TRUE)
## lavaan 0.6.15 ended normally after 116 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 120
##
## Number of observations 553
## Number of missing patterns 104
##
## Model Test User Model:
##
## Test statistic 369.170
## Degrees of freedom 230
## P-value (Chi-square) 0.000
##
## Model Test Baseline Model:
##
## Test statistic 9610.241
## Degrees of freedom 300
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.985
## Tucker-Lewis Index (TLI) 0.981
##
## Robust Comparative Fit Index (CFI) 0.985
## Robust Tucker-Lewis Index (TLI) 0.980
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -17557.145
## Loglikelihood unrestricted model (H1) -17372.560
##
## Akaike (AIC) 35354.290
## Bayesian (BIC) 35872.133
## Sample-size adjusted Bayesian (SABIC) 35491.200
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.033
## 90 Percent confidence interval - lower 0.027
## 90 Percent confidence interval - upper 0.039
## P-value H_0: RMSEA <= 0.050 1.000
## P-value H_0: RMSEA >= 0.080 0.000
##
## Robust RMSEA 0.034
## 90 Percent confidence interval - lower 0.027
## 90 Percent confidence interval - upper 0.040
## P-value H_0: Robust RMSEA <= 0.050 1.000
## P-value H_0: Robust RMSEA >= 0.080 0.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.025
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Observed
## Observed information based on Hessian
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Organization =~
## Im3 1.000 1.236 0.936
## Im4 1.057 0.025 42.745 0.000 1.306 0.970
## Im5 0.818 0.034 23.809 0.000 1.011 0.760
## Food =~
## Im10 1.000 0.810 0.921
## Im14 1.021 0.036 28.295 0.000 0.827 0.955
## Shop_experience =~
## Im20 1.000 1.259 0.842
## Im21 0.855 0.041 20.890 0.000 1.076 0.785
## Im22 1.065 0.046 22.927 0.000 1.341 0.878
## Coolness =~
## Im17 1.000 1.208 0.972
## Im18 0.987 0.041 24.228 0.000 1.192 0.853
## Assortment =~
## Im1 1.000 1.303 0.979
## Im2 0.888 0.032 28.110 0.000 1.157 0.900
## French_lifestyle =~
## Im6 1.000 0.982 0.819
## Im7 1.169 0.067 17.527 0.000 1.148 0.949
## Luxury =~
## Im12 1.000 0.924 0.814
## Im13 1.199 0.068 17.594 0.000 1.108 0.920
## Professionalism =~
## Im16 1.000 0.921 0.766
## Im19 1.045 0.059 17.711 0.000 0.962 0.856
## COM =~
## COM_A1 1.000 1.135 0.790
## COM_A2 1.178 0.055 21.232 0.000 1.337 0.833
## COM_A3 1.177 0.059 19.885 0.000 1.336 0.821
## COM_A4 1.292 0.063 20.608 0.000 1.467 0.845
## SAT =~
## SAT_1 1.000 0.882 0.865
## SAT_2 0.932 0.049 19.104 0.000 0.822 0.818
## SAT_3 0.812 0.055 14.832 0.000 0.716 0.626
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## COM ~
## Assortment (a) 0.117 0.049 2.401 0.016 0.135 0.135
## Frnch_lfst (b) 0.214 0.063 3.395 0.001 0.185 0.185
## Shop_xprnc (c) 0.370 0.051 7.219 0.000 0.410 0.410
## Organizatn (d) 0.007 0.054 0.131 0.896 0.008 0.008
## Luxury (e) -0.159 0.072 -2.210 0.027 -0.129 -0.129
## Coolness (f) 0.009 0.057 0.159 0.873 0.010 0.010
## Food (g) 0.049 0.082 0.600 0.548 0.035 0.035
## Profssnlsm (h) 0.060 0.100 0.604 0.546 0.049 0.049
## SAT ~
## Assortment (i) 0.136 0.039 3.503 0.000 0.202 0.202
## Frnch_lfst (l) 0.099 0.049 2.019 0.044 0.110 0.110
## Shop_xprnc (m) 0.036 0.038 0.955 0.340 0.052 0.052
## Organizatn (n) -0.100 0.042 -2.346 0.019 -0.139 -0.139
## Luxury (o) -0.011 0.056 -0.198 0.843 -0.012 -0.012
## Coolness (p) 0.013 0.045 0.301 0.763 0.018 0.018
## Food (q) 0.086 0.064 1.343 0.179 0.079 0.079
## Profssnlsm (r) 0.435 0.082 5.277 0.000 0.455 0.455
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Organization ~~
## Food 0.417 0.050 8.384 0.000 0.417 0.417
## Shop_experienc 0.727 0.082 8.916 0.000 0.468 0.468
## Coolness 0.769 0.076 10.122 0.000 0.515 0.515
## Assortment 0.709 0.079 9.028 0.000 0.441 0.441
## French_lifstyl 0.408 0.063 6.466 0.000 0.337 0.337
## Luxury 0.528 0.063 8.358 0.000 0.463 0.463
## Professionalsm 0.744 0.071 10.524 0.000 0.653 0.653
## Food ~~
## Shop_experienc 0.300 0.051 5.942 0.000 0.295 0.295
## Coolness 0.318 0.047 6.806 0.000 0.325 0.325
## Assortment 0.327 0.050 6.594 0.000 0.310 0.310
## French_lifstyl 0.467 0.047 9.960 0.000 0.587 0.587
## Luxury 0.309 0.042 7.392 0.000 0.413 0.413
## Professionalsm 0.371 0.043 8.602 0.000 0.498 0.498
## Shop_experience ~~
## Coolness 0.784 0.081 9.709 0.000 0.515 0.515
## Assortment 0.734 0.084 8.715 0.000 0.448 0.448
## French_lifstyl 0.413 0.065 6.396 0.000 0.334 0.334
## Luxury 0.472 0.065 7.307 0.000 0.406 0.406
## Professionalsm 0.553 0.068 8.107 0.000 0.477 0.477
## Coolness ~~
## Assortment 0.816 0.079 10.366 0.000 0.518 0.518
## French_lifstyl 0.385 0.061 6.292 0.000 0.324 0.324
## Luxury 0.645 0.064 10.048 0.000 0.578 0.578
## Professionalsm 0.668 0.066 10.101 0.000 0.600 0.600
## Assortment ~~
## French_lifstyl 0.290 0.061 4.769 0.000 0.227 0.227
## Luxury 0.588 0.066 8.936 0.000 0.488 0.488
## Professionalsm 0.718 0.072 10.020 0.000 0.598 0.598
## French_lifestyle ~~
## Luxury 0.257 0.048 5.383 0.000 0.283 0.283
## Professionalsm 0.334 0.051 6.537 0.000 0.369 0.369
## Luxury ~~
## Professionalsm 0.440 0.053 8.231 0.000 0.517 0.517
## .COM ~~
## .SAT 0.213 0.037 5.735 0.000 0.340 0.340
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Im3 4.995 0.056 88.562 0.000 4.995 3.786
## .Im4 4.998 0.057 86.993 0.000 4.998 3.712
## .Im5 5.035 0.057 87.847 0.000 5.035 3.787
## .Im10 6.100 0.037 162.760 0.000 6.100 6.935
## .Im14 6.138 0.037 165.812 0.000 6.138 7.091
## .Im20 4.672 0.064 73.213 0.000 4.672 3.125
## .Im21 5.139 0.058 87.978 0.000 5.139 3.750
## .Im22 4.280 0.065 65.482 0.000 4.280 2.802
## .Im17 5.025 0.053 94.564 0.000 5.025 4.043
## .Im18 4.595 0.060 76.472 0.000 4.595 3.288
## .Im1 4.792 0.057 84.286 0.000 4.792 3.599
## .Im2 4.858 0.055 88.412 0.000 4.858 3.780
## .Im6 5.827 0.051 113.795 0.000 5.827 4.858
## .Im7 5.754 0.052 110.801 0.000 5.754 4.755
## .Im12 5.664 0.049 116.075 0.000 5.664 4.987
## .Im13 5.447 0.052 105.560 0.000 5.447 4.521
## .Im16 5.135 0.052 99.178 0.000 5.135 4.269
## .Im19 5.145 0.048 106.993 0.000 5.145 4.575
## .COM_A1 4.287 0.061 69.761 0.000 4.287 2.984
## .COM_A2 3.887 0.069 56.730 0.000 3.887 2.422
## .COM_A3 3.541 0.070 50.814 0.000 3.541 2.176
## .COM_A4 3.457 0.074 46.696 0.000 3.457 1.992
## .SAT_1 5.343 0.043 122.938 0.000 5.343 5.238
## .SAT_2 5.483 0.043 127.851 0.000 5.483 5.460
## .SAT_3 5.458 0.050 109.419 0.000 5.458 4.773
## Organization 0.000 0.000 0.000
## Food 0.000 0.000 0.000
## Shop_experienc 0.000 0.000 0.000
## Coolness 0.000 0.000 0.000
## Assortment 0.000 0.000 0.000
## French_lifstyl 0.000 0.000 0.000
## Luxury 0.000 0.000 0.000
## Professionalsm 0.000 0.000 0.000
## .COM 0.000 0.000 0.000
## .SAT 0.000 0.000 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Im3 0.214 0.024 8.790 0.000 0.214 0.123
## .Im4 0.108 0.024 4.498 0.000 0.108 0.060
## .Im5 0.747 0.049 15.219 0.000 0.747 0.422
## .Im10 0.118 0.019 6.087 0.000 0.118 0.152
## .Im14 0.066 0.019 3.456 0.001 0.066 0.088
## .Im20 0.651 0.060 10.836 0.000 0.651 0.291
## .Im21 0.720 0.057 12.699 0.000 0.720 0.384
## .Im22 0.536 0.061 8.718 0.000 0.536 0.230
## .Im17 0.085 0.045 1.885 0.059 0.085 0.055
## .Im18 0.532 0.054 9.764 0.000 0.532 0.272
## .Im1 0.075 0.047 1.578 0.115 0.075 0.042
## .Im2 0.314 0.042 7.520 0.000 0.314 0.190
## .Im6 0.474 0.054 8.804 0.000 0.474 0.330
## .Im7 0.145 0.062 2.339 0.019 0.145 0.099
## .Im12 0.436 0.047 9.238 0.000 0.436 0.338
## .Im13 0.224 0.058 3.884 0.000 0.224 0.154
## .Im16 0.598 0.051 11.820 0.000 0.598 0.414
## .Im19 0.339 0.043 7.815 0.000 0.339 0.268
## .COM_A1 0.776 0.060 13.016 0.000 0.776 0.376
## .COM_A2 0.788 0.067 11.853 0.000 0.788 0.306
## .COM_A3 0.863 0.070 12.283 0.000 0.863 0.326
## .COM_A4 0.861 0.075 11.507 0.000 0.861 0.286
## .SAT_1 0.263 0.034 7.841 0.000 0.263 0.252
## .SAT_2 0.333 0.033 10.092 0.000 0.333 0.331
## .SAT_3 0.795 0.055 14.319 0.000 0.795 0.608
## Organization 1.527 0.107 14.321 0.000 1.000 1.000
## Food 0.656 0.049 13.259 0.000 1.000 1.000
## Shop_experienc 1.584 0.137 11.604 0.000 1.000 1.000
## Coolness 1.460 0.104 14.094 0.000 1.000 1.000
## Assortment 1.698 0.117 14.518 0.000 1.000 1.000
## French_lifstyl 0.965 0.094 10.289 0.000 1.000 1.000
## Luxury 0.854 0.084 10.199 0.000 1.000 1.000
## Professionalsm 0.849 0.087 9.733 0.000 1.000 1.000
## .COM 0.858 0.086 9.975 0.000 0.666 0.666
## .SAT 0.459 0.047 9.710 0.000 0.590 0.590
##
## Defined Parameters:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## A_COM 0.117 0.049 2.401 0.016 0.135 0.135
## Fr_COM 0.214 0.063 3.395 0.001 0.185 0.185
## Sh_COM 0.370 0.051 7.219 0.000 0.410 0.410
## Org_COM 0.007 0.054 0.131 0.896 0.008 0.008
## Lux_COM -0.159 0.072 -2.210 0.027 -0.129 -0.129
## Cool_COM 0.009 0.057 0.159 0.873 0.010 0.010
## Food_COM 0.049 0.082 0.600 0.548 0.035 0.035
## Prof_COM 0.060 0.100 0.604 0.546 0.049 0.049
## A_SAT 0.136 0.039 3.503 0.000 0.202 0.202
## Fr_SAT 0.099 0.049 2.019 0.044 0.110 0.110
## Sh_SAT 0.036 0.038 0.955 0.340 0.052 0.052
## Org_SAT -0.100 0.042 -2.346 0.019 -0.139 -0.139
## Lux_SAT -0.011 0.056 -0.198 0.843 -0.012 -0.012
## Cool_SAT 0.013 0.045 0.301 0.763 0.018 0.018
## Food_SAT 0.086 0.064 1.343 0.179 0.079 0.079
## Prof_SAT 0.435 0.082 5.277 0.000 0.455 0.455
The model remains a good fit.
When we observe the Latent Variables output, we notice that all standardized loadings are above 0.6 (SAT_3 only barely, but it is acceptable). COM and SAT are thus well measured by their indicators.
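The standardized residual variances of the mediators (Std.all column of the Variances output) also tell us how much of their variance the image factors explain; lavInspect(fitm, "rsquare") returns these quantities directly from the fitted object. A quick sketch from the reported values:

```r
# Proportion of variance in each mediator explained by the eight factors
# (1 minus the standardized residual variance reported above)
1 - 0.666  # ~0.33 for COM
1 - 0.590  # ~0.41 for SAT
```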
semPaths(fitm, what = "path", whatLabels = "std", style = "lisrel", fade = TRUE,
rotation = 2, layout = "tree3", mar = c(1, 1, 1, 1),
nCharNodes = 7,shapeMan = "rectangle", sizeMan = 8, sizeMan2 = 5,
curvePivot = TRUE, edge.label.cex = 1, edge.color = "darkblue")
The path linking COM and SAT corresponds to their residual covariance (.COM ~~ .SAT in the output): the two mediators are allowed to covary after the image factors have been accounted for.
From here on, Repurchase Intention will be coded as
C_REP and Co-creation Intention as C_CR.
Repurchase intention is the latent construct which drives the answers to questions C_REP1 to C_REP3, all regarding the customer’s intention to keep purchasing from the Galeries. Co-creation intention is the latent factor which influences whether the interviewed customers are willing to participate in workshops or be interviewed again for similar surveys. It is encoded by questions C_COCRE1 to C_COCRE4.
We display the questions below.
descriptions[23:29,2]
## Label
## 1: C_COCRE1 -CO-CREATION -I would like to participate in an expert-workshop to improve the assortment of Galeries Lafayette Berlin.
## 2: C_COCRE2 -CO-CREATION -I would be available to take part in another survey at Galeries Lafayette Berlin.
## 3: C_COCRE3 -CO-CREATION -I would like to become a member of a customer group whose opinion is obtained for new products and major changes.
## 4: C_COCRE4 -CO-CREATION -I would like to participate in planning and designing special events (e.g. fashion show, introduction of new car models) if asked.
## 5: C_REP1 -REPURCHASE - I will continue to be a loyal customer of Galeries Lafayette Berlin.
## 6: C_REP2 -REPURCHASE - I intend to shop at Galeries Lafayette Berlin in the future.
## 7: C_REP3 -REPURCAHSE - I will surely visit Galeries Lafayette Berlin in the future.
The paths between latent variables (the factors, the mediators COM and SAT, and the outcomes C_REP and C_CR) are modeled as regression paths. As the summary output indicates (Estimator ML), lavaan estimates these paths by maximum likelihood, jointly with the measurement model, rather than fitting each regression separately as lm() would.
construct3 <- "
# MEASUREMENT MODELS
# Factors
Organization =~ Im3 + Im4 + Im5
Food =~ Im10 + Im14
Shop_experience =~ Im20 + Im21 + Im22
Coolness =~ Im17 + Im18
Assortment =~ Im1 + Im2
French_lifestyle =~ Im6 + Im7
Luxury =~ Im12 + Im13
Professionalism =~ Im16 + Im19
# Mediators
COM =~ COM_A1 + COM_A2 + COM_A3 + COM_A4
SAT =~ SAT_1 + SAT_2 + SAT_3
# Outcomes
C_REP =~ C_REP1 + C_REP2 + C_REP3
C_CR =~ C_CR1 + C_CR2 + C_CR3 + C_CR4
# STRUCTURAL MODELS
# Paths from factors to mediators
COM ~ a*Assortment + b*French_lifestyle + c*Shop_experience + d*Organization + e*Luxury + f*Coolness + g*Food + h*Professionalism
SAT ~ i*Assortment + l*French_lifestyle + m*Shop_experience + n*Organization + o*Luxury + p*Coolness + q*Food + r*Professionalism
# Total effects for mediators
# to COM
A_COM := a
Fr_COM := b
Sh_COM := c
Org_COM := d
Lux_COM := e
Cool_COM := f
Food_COM := g
Prof_COM := h
# to SAT
A_SAT := i
Fr_SAT := l
Sh_SAT := m
Org_SAT := n
Lux_SAT := o
Cool_SAT := p
Food_SAT := q
Prof_SAT := r
# Paths from mediators to outcomes
#C_REP ~ u*COM + v*SAT
#C_CR ~ w*COM + z*SAT
# Paths from factors to outcome
C_REP ~ aa*Assortment + bb*French_lifestyle + cc*Shop_experience + dd*Organization + ee*Luxury + ff*Coolness + gg*Food + hh*Professionalism + u*COM + v*SAT
C_CR ~ ii*Assortment + ll*French_lifestyle + mm*Shop_experience + nn*Organization + oo*Luxury + pp*Coolness + qq*Food + rr*Professionalism + w*COM + z*SAT
# Indirect paths from factors to outcomes
# passing through COM (edges [a,...,h] and [u,w])
#C_REP ~ ua*Assortment + ub*French_lifestyle + uc*Shop_experience + ud*Organization + ue*Luxury + uf*Coolness + ug*Food + uh*Professionalism
#C_CR ~ wa*Assortment + wb*French_lifestyle + wc*Shop_experience + wd*Organization + we*Luxury + wf*Coolness + wg*Food + wh*Professionalism
# define indirect effects through COM
# to C_REP
ua := a*u
ub := b*u
uc := c*u
ud := d*u
ue := e*u
uf := f*u
ug := g*u
uh := h*u
# to C_CR
wa := a*w
wb := b*w
wc := c*w
wd := d*w
we := e*w
wf := f*w
wg := g*w
wh := h*w
# passing through SAT (edges [i,...,r] and [v,z])
#C_REP ~ vi*Assortment + vl*French_lifestyle + vm*Shop_experience + vn*Organization + vo*Luxury + vp*Coolness + vq*Food + vr*Professionalism
#C_CR ~ zi*Assortment + zl*French_lifestyle + zm*Shop_experience + zn*Organization + zo*Luxury + zp*Coolness + zq*Food + zr*Professionalism
# define indirect effects
# through COM
vi := i*v
vl := l*v
vm := m*v
vn := n*v
vo := o*v
vp := p*v
vq := q*v
vr := r*v
# Through SAT
zi := i*z
zl := l*z
zm := m*z
zn := n*z
zo := o*z
zp := p*z
zq := q*z
zr := r*z
# Total effects, from factors to outcomes
# to C_REP
A_REP := aa + ua + vi
Fr_REP := bb + ub + vl
Sh_REP := cc + uc + vm
Org_REP := dd + ud + vn
Lux_REP := ee + ue + vo
Cool_REP := ff + uf + vp
Food_REP := gg + ug + vq
Prof_REP := hh + uh + vr
# to C_CR
A_CR := ii + wa + zi
Fr_CR := ll + wb + zl
Sh_CR := mm + wc + zm
Org_CR := nn + wd + zn
Lux_CR := oo + we + zo
Cool_CR := pp + wf + zp
Food_CR := qq + wg + zq
Prof_CR := rr + wh + zr
"
fit3 <- cfa(construct3, data = df1, missing = "ML")
fitsum <- summary(fit3, fit.measures = TRUE, standardized = TRUE)
fitsum
## lavaan 0.6.15 ended normally after 146 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 161
##
## Number of observations 553
## Number of missing patterns 141
##
## Model Test User Model:
##
## Test statistic 680.445
## Degrees of freedom 399
## P-value (Chi-square) 0.000
##
## Model Test Baseline Model:
##
## Test statistic 11878.382
## Degrees of freedom 496
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.975
## Tucker-Lewis Index (TLI) 0.969
##
## Robust Comparative Fit Index (CFI) 0.975
## Robust Tucker-Lewis Index (TLI) 0.969
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -22637.834
## Loglikelihood unrestricted model (H1) -22297.611
##
## Akaike (AIC) 45597.668
## Bayesian (BIC) 46292.440
## Sample-size adjusted Bayesian (SABIC) 45781.355
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.036
## 90 Percent confidence interval - lower 0.031
## 90 Percent confidence interval - upper 0.040
## P-value H_0: RMSEA <= 0.050 1.000
## P-value H_0: RMSEA >= 0.080 0.000
##
## Robust RMSEA 0.036
## 90 Percent confidence interval - lower 0.032
## 90 Percent confidence interval - upper 0.041
## P-value H_0: Robust RMSEA <= 0.050 1.000
## P-value H_0: Robust RMSEA >= 0.080 0.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.043
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Observed
## Observed information based on Hessian
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Organization =~
## Im3 1.000 1.235 0.936
## Im4 1.057 0.025 42.734 0.000 1.306 0.970
## Im5 0.818 0.034 23.804 0.000 1.010 0.760
## Food =~
## Im10 1.000 0.808 0.919
## Im14 1.024 0.036 28.283 0.000 0.828 0.957
## Shop_experience =~
## Im20 1.000 1.261 0.844
## Im21 0.857 0.041 20.995 0.000 1.082 0.789
## Im22 1.057 0.046 23.013 0.000 1.333 0.873
## Coolness =~
## Im17 1.000 1.209 0.972
## Im18 0.986 0.041 24.213 0.000 1.192 0.853
## Assortment =~
## Im1 1.000 1.301 0.977
## Im2 0.891 0.031 28.277 0.000 1.158 0.901
## French_lifestyle =~
## Im6 1.000 0.987 0.823
## Im7 1.158 0.065 17.869 0.000 1.142 0.944
## Luxury =~
## Im12 1.000 0.926 0.816
## Im13 1.192 0.068 17.626 0.000 1.104 0.917
## Professionalism =~
## Im16 1.000 0.919 0.764
## Im19 1.044 0.059 17.829 0.000 0.959 0.853
## COM =~
## COM_A1 1.000 1.144 0.796
## COM_A2 1.174 0.055 21.507 0.000 1.343 0.836
## COM_A3 1.162 0.058 20.036 0.000 1.329 0.817
## COM_A4 1.278 0.061 20.801 0.000 1.461 0.842
## SAT =~
## SAT_1 1.000 0.883 0.865
## SAT_2 0.932 0.049 18.885 0.000 0.823 0.819
## SAT_3 0.809 0.055 14.797 0.000 0.714 0.624
## C_REP =~
## C_REP1 1.000 0.596 0.816
## C_REP2 0.971 0.043 22.488 0.000 0.579 0.931
## C_REP3 0.702 0.037 19.041 0.000 0.419 0.756
## C_CR =~
## C_CR1 1.000 1.640 0.843
## C_CR2 0.559 0.051 10.907 0.000 0.917 0.491
## C_CR3 1.053 0.051 20.677 0.000 1.727 0.833
## C_CR4 0.971 0.048 20.200 0.000 1.593 0.804
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## COM ~
## Assortmnt (a) 0.105 0.050 2.103 0.035 0.119 0.119
## Frnch_lfs (b) 0.221 0.064 3.471 0.001 0.191 0.191
## Shp_xprnc (c) 0.372 0.052 7.206 0.000 0.410 0.410
## Organiztn (d) -0.022 0.054 -0.402 0.687 -0.024 -0.024
## Luxury (e) -0.167 0.073 -2.306 0.021 -0.135 -0.135
## Coolness (f) -0.007 0.058 -0.113 0.910 -0.007 -0.007
## Food (g) 0.027 0.083 0.324 0.746 0.019 0.019
## Prfssnlsm (h) 0.160 0.105 1.521 0.128 0.128 0.128
## SAT ~
## Assortmnt (i) 0.133 0.040 3.361 0.001 0.196 0.196
## Frnch_lfs (l) 0.103 0.049 2.103 0.035 0.116 0.116
## Shp_xprnc (m) 0.051 0.038 1.360 0.174 0.074 0.074
## Organiztn (n) -0.109 0.043 -2.533 0.011 -0.152 -0.152
## Luxury (o) -0.022 0.056 -0.395 0.693 -0.023 -0.023
## Coolness (p) 0.007 0.045 0.157 0.875 0.010 0.010
## Food (q) 0.078 0.064 1.212 0.225 0.071 0.071
## Prfssnlsm (r) 0.459 0.087 5.255 0.000 0.478 0.478
## C_REP ~
## Assortmnt (aa) -0.019 0.026 -0.720 0.472 -0.040 -0.040
## Frnch_lfs (bb) -0.034 0.033 -1.016 0.309 -0.056 -0.056
## Shp_xprnc (cc) 0.041 0.028 1.440 0.150 0.086 0.086
## Organiztn (dd) 0.009 0.029 0.311 0.756 0.019 0.019
## Luxury (ee) 0.063 0.038 1.663 0.096 0.098 0.098
## Coolness (ff) -0.014 0.030 -0.456 0.648 -0.028 -0.028
## Food (gg) 0.040 0.043 0.931 0.352 0.054 0.054
## Prfssnlsm (hh) -0.035 0.060 -0.587 0.557 -0.054 -0.054
## COM (u) 0.186 0.030 6.178 0.000 0.356 0.356
## SAT (v) 0.213 0.045 4.757 0.000 0.316 0.316
## C_CR ~
## Assortmnt (ii) -0.017 0.079 -0.216 0.829 -0.014 -0.014
## Frnch_lfs (ll) -0.132 0.103 -1.286 0.198 -0.079 -0.079
## Shp_xprnc (mm) 0.154 0.086 1.793 0.073 0.119 0.119
## Organiztn (nn) -0.034 0.089 -0.383 0.702 -0.026 -0.026
## Luxury (oo) 0.100 0.117 0.849 0.396 0.056 0.056
## Coolness (pp) 0.029 0.091 0.320 0.749 0.021 0.021
## Food (qq) -0.037 0.131 -0.283 0.777 -0.018 -0.018
## Prfssnlsm (rr) -0.161 0.181 -0.891 0.373 -0.090 -0.090
## COM (w) 0.549 0.090 6.097 0.000 0.383 0.383
## SAT (z) -0.331 0.129 -2.562 0.010 -0.178 -0.178
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## Organization ~~
## Food 0.416 0.050 8.375 0.000 0.416 0.416
## Shop_experienc 0.728 0.082 8.916 0.000 0.467 0.467
## Coolness 0.769 0.076 10.120 0.000 0.515 0.515
## Assortment 0.709 0.079 9.023 0.000 0.441 0.441
## French_lifstyl 0.414 0.063 6.539 0.000 0.339 0.339
## Luxury 0.529 0.063 8.380 0.000 0.463 0.463
## Professionalsm 0.743 0.071 10.543 0.000 0.655 0.655
## Food ~~
## Shop_experienc 0.300 0.051 5.940 0.000 0.295 0.295
## Coolness 0.317 0.047 6.804 0.000 0.325 0.325
## Assortment 0.327 0.050 6.606 0.000 0.311 0.311
## French_lifstyl 0.468 0.047 10.023 0.000 0.587 0.587
## Luxury 0.310 0.042 7.420 0.000 0.414 0.414
## Professionalsm 0.370 0.043 8.600 0.000 0.499 0.499
## Shop_experience ~~
## Coolness 0.784 0.081 9.702 0.000 0.514 0.514
## Assortment 0.734 0.084 8.694 0.000 0.447 0.447
## French_lifstyl 0.415 0.065 6.399 0.000 0.333 0.333
## Luxury 0.475 0.065 7.329 0.000 0.407 0.407
## Professionalsm 0.552 0.068 8.102 0.000 0.476 0.476
## Coolness ~~
## Assortment 0.815 0.079 10.364 0.000 0.519 0.519
## French_lifstyl 0.389 0.061 6.353 0.000 0.326 0.326
## Luxury 0.647 0.064 10.081 0.000 0.578 0.578
## Professionalsm 0.668 0.066 10.116 0.000 0.601 0.601
## Assortment ~~
## French_lifstyl 0.292 0.061 4.778 0.000 0.227 0.227
## Luxury 0.588 0.066 8.939 0.000 0.488 0.488
## Professionalsm 0.717 0.071 10.043 0.000 0.600 0.600
## French_lifestyle ~~
## Luxury 0.260 0.048 5.405 0.000 0.284 0.284
## Professionalsm 0.335 0.051 6.573 0.000 0.370 0.370
## Luxury ~~
## Professionalsm 0.442 0.053 8.264 0.000 0.520 0.520
## .C_REP ~~
## .C_CR -0.006 0.038 -0.148 0.882 -0.008 -0.008
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Im3 4.995 0.056 88.568 0.000 4.995 3.786
## .Im4 4.998 0.057 86.998 0.000 4.998 3.713
## .Im5 5.035 0.057 87.850 0.000 5.035 3.787
## .Im10 6.100 0.037 162.768 0.000 6.100 6.936
## .Im14 6.138 0.037 165.824 0.000 6.138 7.091
## .Im20 4.672 0.064 73.217 0.000 4.672 3.125
## .Im21 5.139 0.058 87.977 0.000 5.139 3.750
## .Im22 4.280 0.065 65.482 0.000 4.280 2.802
## .Im17 5.025 0.053 94.561 0.000 5.025 4.043
## .Im18 4.595 0.060 76.468 0.000 4.595 3.288
## .Im1 4.792 0.057 84.290 0.000 4.792 3.600
## .Im2 4.858 0.055 88.416 0.000 4.858 3.781
## .Im6 5.828 0.051 113.798 0.000 5.828 4.858
## .Im7 5.754 0.052 110.818 0.000 5.754 4.756
## .Im12 5.664 0.049 116.109 0.000 5.664 4.989
## .Im13 5.447 0.052 105.621 0.000 5.447 4.524
## .Im16 5.135 0.052 99.187 0.000 5.135 4.269
## .Im19 5.145 0.048 107.019 0.000 5.145 4.576
## .COM_A1 4.287 0.061 69.747 0.000 4.287 2.983
## .COM_A2 3.887 0.069 56.668 0.000 3.887 2.420
## .COM_A3 3.543 0.070 50.860 0.000 3.543 2.178
## .COM_A4 3.456 0.074 46.672 0.000 3.456 1.991
## .SAT_1 5.344 0.043 122.953 0.000 5.344 5.239
## .SAT_2 5.482 0.043 127.744 0.000 5.482 5.455
## .SAT_3 5.458 0.050 109.428 0.000 5.458 4.773
## .C_REP1 4.283 0.031 137.516 0.000 4.283 5.860
## .C_REP2 4.507 0.027 169.661 0.000 4.507 7.250
## .C_REP3 4.677 0.024 196.945 0.000 4.677 8.446
## .C_CR1 2.677 0.083 32.104 0.000 2.677 1.376
## .C_CR2 4.616 0.081 56.849 0.000 4.616 2.472
## .C_CR3 3.262 0.088 36.909 0.000 3.262 1.574
## .C_CR4 2.787 0.085 32.923 0.000 2.787 1.406
## Organization 0.000 0.000 0.000
## Food 0.000 0.000 0.000
## Shop_experienc 0.000 0.000 0.000
## Coolness 0.000 0.000 0.000
## Assortment 0.000 0.000 0.000
## French_lifstyl 0.000 0.000 0.000
## Luxury 0.000 0.000 0.000
## Professionalsm 0.000 0.000 0.000
## .COM 0.000 0.000 0.000
## .SAT 0.000 0.000 0.000
## .C_REP 0.000 0.000 0.000
## .C_CR 0.000 0.000 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .Im3 0.214 0.024 8.793 0.000 0.214 0.123
## .Im4 0.108 0.024 4.484 0.000 0.108 0.059
## .Im5 0.747 0.049 15.220 0.000 0.747 0.423
## .Im10 0.120 0.019 6.209 0.000 0.120 0.155
## .Im14 0.064 0.019 3.322 0.001 0.064 0.085
## .Im20 0.645 0.060 10.838 0.000 0.645 0.288
## .Im21 0.708 0.056 12.623 0.000 0.708 0.377
## .Im22 0.557 0.061 9.063 0.000 0.557 0.239
## .Im17 0.084 0.045 1.876 0.061 0.084 0.055
## .Im18 0.532 0.054 9.762 0.000 0.532 0.272
## .Im1 0.080 0.047 1.719 0.086 0.080 0.045
## .Im2 0.309 0.041 7.481 0.000 0.309 0.187
## .Im6 0.465 0.053 8.801 0.000 0.465 0.323
## .Im7 0.159 0.060 2.633 0.008 0.159 0.109
## .Im12 0.431 0.047 9.150 0.000 0.431 0.335
## .Im13 0.231 0.057 4.025 0.000 0.231 0.159
## .Im16 0.603 0.050 11.936 0.000 0.603 0.417
## .Im19 0.344 0.044 7.886 0.000 0.344 0.272
## .COM_A1 0.756 0.058 12.960 0.000 0.756 0.366
## .COM_A2 0.778 0.065 11.910 0.000 0.778 0.302
## .COM_A3 0.879 0.070 12.494 0.000 0.879 0.332
## .COM_A4 0.877 0.075 11.736 0.000 0.877 0.291
## .SAT_1 0.261 0.034 7.701 0.000 0.261 0.251
## .SAT_2 0.333 0.033 9.965 0.000 0.333 0.330
## .SAT_3 0.798 0.056 14.348 0.000 0.798 0.610
## .C_REP1 0.179 0.016 11.305 0.000 0.179 0.335
## .C_REP2 0.051 0.010 4.938 0.000 0.051 0.132
## .C_REP3 0.131 0.009 14.061 0.000 0.131 0.428
## .C_CR1 1.094 0.108 10.092 0.000 1.094 0.289
## .C_CR2 2.646 0.172 15.377 0.000 2.646 0.759
## .C_CR3 1.313 0.126 10.455 0.000 1.313 0.306
## .C_CR4 1.390 0.119 11.681 0.000 1.390 0.354
## Organization 1.526 0.107 14.318 0.000 1.000 1.000
## Food 0.654 0.049 13.224 0.000 1.000 1.000
## Shop_experienc 1.591 0.137 11.655 0.000 1.000 1.000
## Coolness 1.461 0.104 14.094 0.000 1.000 1.000
## Assortment 1.692 0.117 14.494 0.000 1.000 1.000
## French_lifstyl 0.974 0.094 10.413 0.000 1.000 1.000
## Luxury 0.858 0.084 10.242 0.000 1.000 1.000
## Professionalsm 0.844 0.087 9.717 0.000 1.000 1.000
## .COM 0.859 0.086 10.037 0.000 0.657 0.657
## .SAT 0.450 0.047 9.467 0.000 0.577 0.577
## .C_REP 0.237 0.022 10.940 0.000 0.667 0.667
## .C_CR 2.237 0.204 10.957 0.000 0.831 0.831
##
## Defined Parameters:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## A_COM 0.105 0.050 2.103 0.035 0.119 0.119
## Fr_COM 0.221 0.064 3.471 0.001 0.191 0.191
## Sh_COM 0.372 0.052 7.206 0.000 0.410 0.410
## Org_COM -0.022 0.054 -0.402 0.687 -0.024 -0.024
## Lux_COM -0.167 0.073 -2.306 0.021 -0.135 -0.135
## Cool_COM -0.007 0.058 -0.113 0.910 -0.007 -0.007
## Food_COM 0.027 0.083 0.324 0.746 0.019 0.019
## Prof_COM 0.160 0.105 1.521 0.128 0.128 0.128
## A_SAT 0.133 0.040 3.361 0.001 0.196 0.196
## Fr_SAT 0.103 0.049 2.103 0.035 0.116 0.116
## Sh_SAT 0.051 0.038 1.360 0.174 0.074 0.074
## Org_SAT -0.109 0.043 -2.533 0.011 -0.152 -0.152
## Lux_SAT -0.022 0.056 -0.395 0.693 -0.023 -0.023
## Cool_SAT 0.007 0.045 0.157 0.875 0.010 0.010
## Food_SAT 0.078 0.064 1.212 0.225 0.071 0.071
## Prof_SAT 0.459 0.087 5.255 0.000 0.478 0.478
## ua 0.019 0.010 1.996 0.046 0.043 0.043
## ub 0.041 0.014 3.038 0.002 0.068 0.068
## uc 0.069 0.014 4.809 0.000 0.146 0.146
## ud -0.004 0.010 -0.402 0.688 -0.008 -0.008
## ue -0.031 0.014 -2.157 0.031 -0.048 -0.048
## uf -0.001 0.011 -0.113 0.910 -0.002 -0.002
## ug 0.005 0.015 0.323 0.747 0.007 0.007
## uh 0.030 0.020 1.478 0.140 0.046 0.046
## wa 0.058 0.029 2.003 0.045 0.046 0.046
## wb 0.121 0.040 3.027 0.002 0.073 0.073
## wc 0.204 0.042 4.798 0.000 0.157 0.157
## wd -0.012 0.030 -0.401 0.688 -0.009 -0.009
## we -0.092 0.043 -2.158 0.031 -0.052 -0.052
## wf -0.004 0.032 -0.113 0.910 -0.003 -0.003
## wg 0.015 0.046 0.323 0.746 0.007 0.007
## wh 0.088 0.060 1.468 0.142 0.049 0.049
## vi 0.028 0.010 2.823 0.005 0.062 0.062
## vl 0.022 0.011 1.935 0.053 0.036 0.036
## vm 0.011 0.008 1.316 0.188 0.023 0.023
## vn -0.023 0.010 -2.225 0.026 -0.048 -0.048
## vo -0.005 0.012 -0.394 0.694 -0.007 -0.007
## vp 0.002 0.010 0.157 0.875 0.003 0.003
## vq 0.017 0.014 1.179 0.239 0.023 0.023
## vr 0.098 0.028 3.477 0.001 0.151 0.151
## zi -0.044 0.022 -2.030 0.042 -0.035 -0.035
## zl -0.034 0.021 -1.636 0.102 -0.021 -0.021
## zm -0.017 0.014 -1.189 0.234 -0.013 -0.013
## zn 0.036 0.020 1.810 0.070 0.027 0.027
## zo 0.007 0.019 0.391 0.696 0.004 0.004
## zp -0.002 0.015 -0.157 0.875 -0.002 -0.002
## zq -0.026 0.024 -1.091 0.275 -0.013 -0.013
## zr -0.152 0.065 -2.330 0.020 -0.085 -0.085
## A_REP 0.029 0.028 1.068 0.285 0.064 0.064
## Fr_REP 0.029 0.035 0.833 0.405 0.049 0.049
## Sh_REP 0.121 0.028 4.277 0.000 0.255 0.255
## Org_REP -0.018 0.030 -0.599 0.549 -0.038 -0.038
## Lux_REP 0.027 0.041 0.671 0.502 0.042 0.042
## Cool_REP -0.013 0.032 -0.412 0.680 -0.027 -0.027
## Food_REP 0.062 0.046 1.328 0.184 0.083 0.083
## Prof_REP 0.092 0.058 1.582 0.114 0.142 0.142
## A_CR -0.004 0.082 -0.045 0.964 -0.003 -0.003
## Fr_CR -0.045 0.105 -0.427 0.669 -0.027 -0.027
## Sh_CR 0.341 0.083 4.129 0.000 0.262 0.262
## Org_CR -0.010 0.091 -0.110 0.912 -0.008 -0.008
## Lux_CR 0.015 0.121 0.125 0.900 0.009 0.009
## Cool_CR 0.023 0.095 0.243 0.808 0.017 0.017
## Food_CR -0.048 0.137 -0.352 0.725 -0.024 -0.024
## Prof_CR -0.226 0.170 -1.332 0.183 -0.127 -0.127
fitsumpe <- fitsum$pe
The model remains of good fit: CFI (0.975) and TLI (0.969) are above 0.95, RMSEA (0.036) is below 0.05, and SRMR (0.043) is below 0.08.
All standardized loadings (std.all) are above 0.6, with the exception of C_CR2 (0.491), which we return to below.
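As a sanity check, the incremental fit indices can be reproduced by hand from the chi-square statistics reported above (values copied from the lavaan output):

```r
# Reproduce CFI and TLI from the reported chi-square statistics
chisq_m <- 680.445;   df_m <- 399   # user model
chisq_b <- 11878.382; df_b <- 496   # baseline model

cfi <- 1 - (chisq_m - df_m) / (chisq_b - df_b)
tli <- (chisq_b / df_b - chisq_m / df_m) / (chisq_b / df_b - 1)
round(c(CFI = cfi, TLI = tli), 3)
```

Both values match lavaan's reported 0.975 and 0.969.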
We plot the model using lavaanPlot, which gives the
correct structure of our model. However, it displays no estimated
parameters.
lavaanPlot(name = "plot", fit3, labels = NULL)
The semPaths plot instead gives us the parameters, but
the two mediators are not displayed as such.
semPaths(fit3,what = "path", whatLabels = "std", style = "lisrel",exoCov = T,
rotation = 2, layout = "tree", mar = c(1, 2, 1, 2),
nCharNodes = 7,shapeMan = "rectangle", sizeMan = 7, sizeMan2 = 5,
curvePivot = TRUE, edge.label.cex = 1, edge.color = "darkblue")
Construct reliability can be measured either with Cronbach’s alpha or with Composite Reliability (CR). The former is the less reliable measure. Regardless, if either exceeds 0.95 it means that the observed items forming the factor are measuring the same thing, making them redundant.
Cronbach’s alpha
cronbach(subset(df1, select = c(Im1,Im2)))$alpha
## [1] 0.9372013
cronbach(subset(df1, select = c(Im6,Im7)))$alpha
## [1] 0.8758912
cronbach(subset(df1, select = c(Im20,Im21,Im22)))$alpha
## [1] 0.8749604
cronbach(subset(df1, select = c(Im3,Im4,Im5)))$alpha
## [1] 0.9151505
cronbach(subset(df1, select = c(Im12,Im13)))$alpha
## [1] 0.8540007
cronbach(subset(df1, select = c(Im17,Im18)))$alpha
## [1] 0.9039139
cronbach(subset(df1, select = c(Im10,Im14)))$alpha
## [1] 0.9334071
cronbach(subset(df1, select = c(Im16,Im19)))$alpha
## [1] 0.7940545
cronbach(subset(df1, select = c(COM_A1:COM_A4)))$alpha
## [1] 0.8929881
cronbach(subset(df1, select = c(SAT_1:SAT_3)))$alpha
## [1] 0.7994863
cronbach(subset(df1, select = c(C_CR1:C_CR4)))$alpha
## [1] 0.8322713
cronbach(subset(df1, select = c(C_REP1:C_REP3)))$alpha
## [1] 0.860796
The alphas of the factors are high, all above the 0.70 benchmark, so the items are sufficiently reliable. None exceeds 0.95: the questions describe the latent factors well without being redundant.
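For intuition about what drives these values, standardized alpha for k parallel items can be written via the average inter-item correlation (the Spearman–Brown form). A minimal sketch with a hypothetical average correlation of 0.7:

```r
# Standardized alpha for k items with average inter-item correlation r
k <- 3
r <- 0.7                                # assumed average inter-item correlation
alpha_std <- k * r / (1 + (k - 1) * r)
round(alpha_std, 3)                     # 0.875
```

Three items correlating at 0.7 on average already yield an alpha of 0.875, in the range of the factor alphas above.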
Composite Reliability
# computing needed parameters
std.loadings <- inspect(fit3, what = "std")$lambda
check = std.loadings
check[check > 0] <- 1
std.loadings[std.loadings == 0] <- NA
std.loadings2 <- std.loadings^2
std.theta <- inspect(fit3, what = "std")$theta
#CR
sum.std.loadings <- colSums(std.loadings, na.rm = TRUE)^2
sum.std.theta <- rowSums(std.theta)
sum.std.theta = check*sum.std.theta
CR = sum.std.loadings/(sum.std.loadings + colSums(sum.std.theta))
CR
## Organization Food Shop_experience Coolness
## 0.9215463 0.9361249 0.8740874 0.9106537
## Assortment French_lifestyle Luxury Professionalism
## 0.9381384 0.8784742 0.8587171 0.7915213
## COM SAT C_REP C_CR
## 0.8934628 0.8173352 0.8749171 0.8379562
# a function to compute them exists as well
# semTools::compRelSEM(fit3)
The CR values are sufficiently high (all above 0.7) without signaling redundancy (none exceeds 0.95).
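The formula implemented above, CR = (Σλ)² / ((Σλ)² + Σθ), can be checked by hand for a single factor. Plugging in the standardized Organization loadings from the model output (in the standardized solution θ = 1 − λ²):

```r
# Composite Reliability for Organization, from its standardized loadings
lambda <- c(0.936, 0.970, 0.760)  # Im3, Im4, Im5 (std.all, from the output)
theta  <- 1 - lambda^2            # residual variances in the std. solution
CR_org <- sum(lambda)^2 / (sum(lambda)^2 + sum(theta))
round(CR_org, 3)                  # 0.922
```

This reproduces the 0.9215 reported above, up to the rounding of the loadings.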
Indicator Reliability
It is computed as the true score variance divided by total variance.
#Individual item Reliability
IIR = std.loadings2/(colSums(std.theta) + std.loadings2)
IIR
## Orgnzt Food Shp_xp Colnss Assrtm Frnch_ Luxury Prfssn COM SAT C_REP
## Im3 0.877 NA NA NA NA NA NA NA NA NA NA
## Im4 0.941 NA NA NA NA NA NA NA NA NA NA
## Im5 0.577 NA NA NA NA NA NA NA NA NA NA
## Im10 NA 0.845 NA NA NA NA NA NA NA NA NA
## Im14 NA 0.915 NA NA NA NA NA NA NA NA NA
## Im20 NA NA 0.712 NA NA NA NA NA NA NA NA
## Im21 NA NA 0.623 NA NA NA NA NA NA NA NA
## Im22 NA NA 0.761 NA NA NA NA NA NA NA NA
## Im17 NA NA NA 0.945 NA NA NA NA NA NA NA
## Im18 NA NA NA 0.728 NA NA NA NA NA NA NA
## Im1 NA NA NA NA 0.955 NA NA NA NA NA NA
## Im2 NA NA NA NA 0.813 NA NA NA NA NA NA
## Im6 NA NA NA NA NA 0.677 NA NA NA NA NA
## Im7 NA NA NA NA NA 0.891 NA NA NA NA NA
## Im12 NA NA NA NA NA NA 0.665 NA NA NA NA
## Im13 NA NA NA NA NA NA 0.841 NA NA NA NA
## Im16 NA NA NA NA NA NA NA 0.583 NA NA NA
## Im19 NA NA NA NA NA NA NA 0.728 NA NA NA
## COM_A1 NA NA NA NA NA NA NA NA 0.634 NA NA
## COM_A2 NA NA NA NA NA NA NA NA 0.698 NA NA
## COM_A3 NA NA NA NA NA NA NA NA 0.668 NA NA
## COM_A4 NA NA NA NA NA NA NA NA 0.709 NA NA
## SAT_1 NA NA NA NA NA NA NA NA NA 0.749 NA
## SAT_2 NA NA NA NA NA NA NA NA NA 0.670 NA
## SAT_3 NA NA NA NA NA NA NA NA NA 0.390 NA
## C_REP1 NA NA NA NA NA NA NA NA NA NA 0.665
## C_REP2 NA NA NA NA NA NA NA NA NA NA 0.868
## C_REP3 NA NA NA NA NA NA NA NA NA NA 0.572
## C_CR1 NA NA NA NA NA NA NA NA NA NA NA
## C_CR2 NA NA NA NA NA NA NA NA NA NA NA
## C_CR3 NA NA NA NA NA NA NA NA NA NA NA
## C_CR4 NA NA NA NA NA NA NA NA NA NA NA
## C_CR
## Im3 NA
## Im4 NA
## Im5 NA
## Im10 NA
## Im14 NA
## Im20 NA
## Im21 NA
## Im22 NA
## Im17 NA
## Im18 NA
## Im1 NA
## Im2 NA
## Im6 NA
## Im7 NA
## Im12 NA
## Im13 NA
## Im16 NA
## Im19 NA
## COM_A1 NA
## COM_A2 NA
## COM_A3 NA
## COM_A4 NA
## SAT_1 NA
## SAT_2 NA
## SAT_3 NA
## C_REP1 NA
## C_REP2 NA
## C_REP3 NA
## C_CR1 0.711
## C_CR2 0.241
## C_CR3 0.694
## C_CR4 0.646
We observe that the reliability for C_CR2 is low (below
0.4). As in exploratory factor analysis, we could argue for
excluding it, given that its loading onto C_CR is
low in the measurement-model output. We notice that
SAT_3 is also below 0.4, but close enough to the
cutoff to be less worrying.
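Since θ = 1 − λ² in the standardized solution, indicator reliability reduces to the squared standardized loading, so the low value for C_CR2 follows directly from its loading of 0.491 in the output above:

```r
# Indicator reliability = squared standardized loading (std. solution)
lambda_ccr2 <- 0.491          # std.all loading of C_CR2 (from the output)
round(lambda_ccr2^2, 3)       # 0.241, matching the table above
```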
The Average Variance Extracted (AVE) is the measure of the amount of variance that is captured by a construct in relation to the amount of variance due to measurement error.
std.loadings <- inspect(fit3, what = "std")$lambda
std.loadings <- std.loadings^2
AVE = colSums(std.loadings)/(colSums(sum.std.theta) + colSums(std.loadings))
AVE
## Organization Food Shop_experience Coolness
## 0.7982853 0.8799618 0.6986104 0.8365463
## Assortment French_lifestyle Luxury Professionalism
## 0.8836510 0.7840853 0.7530485 0.6556625
## COM SAT C_REP C_CR
## 0.6771686 0.6030053 0.7014335 0.5731559
# a function to compute them exists as well
# semTools::AVE(fit3)
We want the AVE to be above 0.5. Fortunately, this is the case for all
latent variables, with C_CR being the closest to the
cutoff.
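In the standardized solution the AVE formula simplifies to the mean squared loading (since each θ = 1 − λ²). Checking Organization against the reported 0.798:

```r
# AVE for Organization = mean of squared standardized loadings
lambda  <- c(0.936, 0.970, 0.760)   # Im3, Im4, Im5 (std.all, from the output)
ave_org <- mean(lambda^2)
round(ave_org, 3)                   # 0.798
```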
Discriminant validity measures the distinctiveness of a construct, hence we want to check whether the constructs are highly intercorrelated or not. The former case implies that we have separated a construct into two or more factors, when it should have been designed as one larger construct. Discriminant validity is demonstrated when the shared variance within a construct (AVE) exceeds the shared variance between the constructs.
In practice, we check that the AVE of a construct is greater than the square correlations of the construct with all other constructs.
# std contains a list of model matrices
# of the completely standardized model parameters
std_fit1 = inspect(fit3, "std")
std_fit1$psi^2
## Orgnzt Food Shp_xp Colnss Assrtm Frnch_ Luxury Prfssn COM
## Organization 1.000
## Food 0.173 1.000
## Shop_experience 0.218 0.087 1.000
## Coolness 0.265 0.105 0.265 1.000
## Assortment 0.195 0.097 0.200 0.269 1.000
## French_lifestyle 0.115 0.345 0.111 0.106 0.052 1.000
## Luxury 0.214 0.171 0.166 0.334 0.239 0.081 1.000
## Professionalism 0.429 0.249 0.227 0.362 0.360 0.137 0.270 1.000
## COM 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.431
## SAT 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## C_REP 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## C_CR 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## SAT C_REP C_CR
## Organization
## Food
## Shop_experience
## Coolness
## Assortment
## French_lifestyle
## Luxury
## Professionalism
## COM
## SAT 0.333
## C_REP 0.000 0.445
## C_CR 0.000 0.000 0.691
The diagonal entries for the exogenous constructs are all 1 because their variances have been standardized for identification, which confirms we are looking at a correlation matrix rather than a covariance matrix (for the endogenous latents the diagonal shows squared residual variances instead). We compare the AVE values with the entries below the diagonal. The squared correlations between constructs are much lower than the AVE values, so there is discriminant validity. The constructs we designed are distinct and valid.
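The tightest comparison in the matrix above is Organization vs. Professionalism, whose correlation (0.655) yields the largest squared off-diagonal entry. A quick Fornell–Larcker check for that pair, with values copied from the outputs above:

```r
# Fornell-Larcker: each construct's AVE must exceed the squared correlation
ave_org  <- 0.798           # AVE(Organization), from above
ave_prof <- 0.656           # AVE(Professionalism), from above
sq_cor   <- 0.655^2         # squared Organization~Professionalism correlation
c(ave_org > sq_cor, ave_prof > sq_cor)
```

Both comparisons hold (0.429 is below both AVEs), so even the most correlated pair of constructs passes the criterion.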
To answer the first portion of this question we need to look at the
output for the regression paths (structural model) connecting the 8
exogenous factors we designed and the mediator factors. The output can
be interpreted like any regular lm() model, so we
look at the Estimate column, not at the std.all column. We
remind the reader that the standardized loadings tell us the amount of
variance in the item explained by the construct.
One thing to look out for is that not all our hypotheses might be confirmed; some estimates might seem counter-intuitive.
The parameters of the model are stored in our
fitsumpe table, which contains the parameter estimates printed by the
summary() function for our model. We display the rows
corresponding to the eight factors driving COM.
fitsumpe[33:40,] %>% kable(digits = 3) %>% kable_styling(full_width = TRUE)
| lhs | op | rhs | label | exo | est | se | z | pvalue | std.lv | std.all | std.nox | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 33 | COM | ~ | Assortment | a | 0 | 0.105 | 0.050 | 2.103 | 0.035 | 0.119 | 0.119 | 0.119 |
| 34 | COM | ~ | French_lifestyle | b | 0 | 0.221 | 0.064 | 3.471 | 0.001 | 0.191 | 0.191 | 0.191 |
| 35 | COM | ~ | Shop_experience | c | 0 | 0.372 | 0.052 | 7.206 | 0.000 | 0.410 | 0.410 | 0.410 |
| 36 | COM | ~ | Organization | d | 0 | -0.022 | 0.054 | -0.402 | 0.687 | -0.024 | -0.024 | -0.024 |
| 37 | COM | ~ | Luxury | e | 0 | -0.167 | 0.073 | -2.306 | 0.021 | -0.135 | -0.135 | -0.135 |
| 38 | COM | ~ | Coolness | f | 0 | -0.007 | 0.058 | -0.113 | 0.910 | -0.007 | -0.007 | -0.007 |
| 39 | COM | ~ | Food | g | 0 | 0.027 | 0.083 | 0.324 | 0.746 | 0.019 | 0.019 | 0.019 |
| 40 | COM | ~ | Professionalism | h | 0 | 0.160 | 0.105 | 1.521 | 0.128 | 0.128 | 0.128 | 0.128 |
The regression estimates, under column est, can be
visualized using a bar chart.
ggplot(fitsumpe[33:40,], aes(x = reorder(rhs, -est),
y = est,
fill = ifelse(pvalue < 0.05, "Significant", "Not significant"))) +
geom_col() +
theme_classic() +
theme(axis.text.x = element_text(angle = 45, vjust = 0.5, hjust = 0.5)) +
geom_hline(yintercept = 0) +
labs(x = "Factors",
y = "Regression estimate",
fill = "p value",
title = "Regression estimates of factors on Affective Commitment")
We immediately see that the factors with significant estimates are Shopping Experience, French Lifestyle, Assortment and Luxury.
Among the client perception factors, the one with the strongest effect on Affective Commitment is Shopping Experience, in a positive direction. For the client, having a relaxing and intimate shopping experience builds affective commitment; this is intuitive, as having a nice time at the Galeries will influence attachment, and a nice memory of the time spent there adds positive connotations to the brand. Moreover, Shopping Experience is the only strongly significant estimate in this part of the model.
Other significant factors are French Lifestyle, Assortment and Luxury. For the first, we can deduce that people shopping at les Galeries in Berlin may be looking for a particular experience, one centered around a French way of living; finding this expectation satisfied deepens their affective attachment to the brand. Assortment also drives attachment: if clients know they will find what they are looking for, or conclude a purchase that was highly satisfying for them, their attachment grows. What seems counter-intuitive is the negative coefficient on the Luxury factor, but we could perhaps infer that a certain kind of in-store ambiance intimidates customers, or that too much luxury makes the brand come across as colder, more detached and elitist.
Factors that do not seem to drive Affective Commitment are Professionalism, Food, Coolness and Organization.
We retrieve the parameters regarding SAT from our
parameter table.
fitsumpe[41:48,] %>% kable(digits = 3) %>% kable_styling(full_width = TRUE)
| lhs | op | rhs | label | exo | est | se | z | pvalue | std.lv | std.all | std.nox | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 41 | SAT | ~ | Assortment | i | 0 | 0.133 | 0.040 | 3.361 | 0.001 | 0.196 | 0.196 | 0.196 |
| 42 | SAT | ~ | French_lifestyle | l | 0 | 0.103 | 0.049 | 2.103 | 0.035 | 0.116 | 0.116 | 0.116 |
| 43 | SAT | ~ | Shop_experience | m | 0 | 0.051 | 0.038 | 1.360 | 0.174 | 0.074 | 0.074 | 0.074 |
| 44 | SAT | ~ | Organization | n | 0 | -0.109 | 0.043 | -2.533 | 0.011 | -0.152 | -0.152 | -0.152 |
| 45 | SAT | ~ | Luxury | o | 0 | -0.022 | 0.056 | -0.395 | 0.693 | -0.023 | -0.023 | -0.023 |
| 46 | SAT | ~ | Coolness | p | 0 | 0.007 | 0.045 | 0.157 | 0.875 | 0.010 | 0.010 | 0.010 |
| 47 | SAT | ~ | Food | q | 0 | 0.078 | 0.064 | 1.212 | 0.225 | 0.071 | 0.071 | 0.071 |
| 48 | SAT | ~ | Professionalism | r | 0 | 0.459 | 0.087 | 5.255 | 0.000 | 0.478 | 0.478 | 0.478 |
ggplot(fitsumpe[41:48,], aes(x = reorder(rhs, -est),
y = est,
fill = ifelse(pvalue < 0.05, "Significant", "Not significant"))) +
geom_col() +
theme_classic() +
theme(axis.text.x = element_text(angle = 45, vjust = 0.5, hjust = 0.5)) +
geom_hline(yintercept = 0) +
labs(x = "Factors",
y = "Regression estimate",
fill = "p value",
title = "Regression estimates of factors on Satisfaction")
We immediately see that the factors with significant estimates are Professionalism, Assortment, French Lifestyle and Organization.
Regarding Assortment and French Lifestyle, we can identify the same dynamics mentioned for Affective Commitment, i.e. finding a specific product or a certain style are key elements of client satisfaction. Professionalism is the most significant element; this seems very intuitive, as professionalism influences service quality and therefore customer satisfaction with the service.
Organization, very counter-intuitively, has a negative effect on customer satisfaction; this could be an indication to change something about the organization and arrangement of the store.
The constructs that do not seem to factor into customer satisfaction are Food, Shopping Experience, Coolness and Luxury.
The mechanisms driving the two mediating factors are similar in some instances: we have seen how Assortment and French Lifestyle are important in both cases. The variety of the offer and the identification of the brand as something typically French are driving forces of customers’ attachment and satisfaction. There are differences as well: Professionalism is not relevant for customers’ Affective Commitment, and neither is Organization, but both of these constructs are important when considering satisfaction.
Are COM and SAT mediating?
We display the regression estimates for C_CR and C_REP with respect to the eight factors, COM and SAT.
fitsumpe[59:68,] %>% kable(digits = 3) %>% kable_styling(full_width = TRUE)
| lhs | op | rhs | label | exo | est | se | z | pvalue | std.lv | std.all | std.nox | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 59 | C_CR | ~ | Assortment | ii | 0 | -0.017 | 0.079 | -0.216 | 0.829 | -0.014 | -0.014 | -0.014 |
| 60 | C_CR | ~ | French_lifestyle | ll | 0 | -0.132 | 0.103 | -1.286 | 0.198 | -0.079 | -0.079 | -0.079 |
| 61 | C_CR | ~ | Shop_experience | mm | 0 | 0.154 | 0.086 | 1.793 | 0.073 | 0.119 | 0.119 | 0.119 |
| 62 | C_CR | ~ | Organization | nn | 0 | -0.034 | 0.089 | -0.383 | 0.702 | -0.026 | -0.026 | -0.026 |
| 63 | C_CR | ~ | Luxury | oo | 0 | 0.100 | 0.117 | 0.849 | 0.396 | 0.056 | 0.056 | 0.056 |
| 64 | C_CR | ~ | Coolness | pp | 0 | 0.029 | 0.091 | 0.320 | 0.749 | 0.021 | 0.021 | 0.021 |
| 65 | C_CR | ~ | Food | qq | 0 | -0.037 | 0.131 | -0.283 | 0.777 | -0.018 | -0.018 | -0.018 |
| 66 | C_CR | ~ | Professionalism | rr | 0 | -0.161 | 0.181 | -0.891 | 0.373 | -0.090 | -0.090 | -0.090 |
| 67 | C_CR | ~ | COM | w | 0 | 0.549 | 0.090 | 6.097 | 0.000 | 0.383 | 0.383 | 0.383 |
| 68 | C_CR | ~ | SAT | z | 0 | -0.331 | 0.129 | -2.562 | 0.010 | -0.178 | -0.178 | -0.178 |
fitsumpe[49:58,] %>% kable(digits = 3) %>% kable_styling(full_width = TRUE)
| lhs | op | rhs | label | exo | est | se | z | pvalue | std.lv | std.all | std.nox | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 49 | C_REP | ~ | Assortment | aa | 0 | -0.019 | 0.026 | -0.720 | 0.472 | -0.040 | -0.040 | -0.040 |
| 50 | C_REP | ~ | French_lifestyle | bb | 0 | -0.034 | 0.033 | -1.016 | 0.309 | -0.056 | -0.056 | -0.056 |
| 51 | C_REP | ~ | Shop_experience | cc | 0 | 0.041 | 0.028 | 1.440 | 0.150 | 0.086 | 0.086 | 0.086 |
| 52 | C_REP | ~ | Organization | dd | 0 | 0.009 | 0.029 | 0.311 | 0.756 | 0.019 | 0.019 | 0.019 |
| 53 | C_REP | ~ | Luxury | ee | 0 | 0.063 | 0.038 | 1.663 | 0.096 | 0.098 | 0.098 | 0.098 |
| 54 | C_REP | ~ | Coolness | ff | 0 | -0.014 | 0.030 | -0.456 | 0.648 | -0.028 | -0.028 | -0.028 |
| 55 | C_REP | ~ | Food | gg | 0 | 0.040 | 0.043 | 0.931 | 0.352 | 0.054 | 0.054 | 0.054 |
| 56 | C_REP | ~ | Professionalism | hh | 0 | -0.035 | 0.060 | -0.587 | 0.557 | -0.054 | -0.054 | -0.054 |
| 57 | C_REP | ~ | COM | u | 0 | 0.186 | 0.030 | 6.178 | 0.000 | 0.356 | 0.356 | 0.356 |
| 58 | C_REP | ~ | SAT | v | 0 | 0.213 | 0.045 | 4.757 | 0.000 | 0.316 | 0.316 | 0.316 |
Yes: in both regression paths, the estimates corresponding to Affective Commitment and Satisfaction are significant. Repurchase intention is positively driven by both mediators, whereas co-creation intention is negatively driven by satisfaction. This seems logical: the more satisfied customers are, the less motivated they are to share ideas for improvement.
Clients who have had positive experiences at the Galeries and have developed an affective attachment to the store are more likely to repurchase there.
Furthermore, it is important to note that none of the eight factors we designed has a significant regression coefficient: their \(p\)-values are all above 0.05. This means that the direct effects of the factors on C_CR and C_REP are insignificant, and that COM and SAT fully mediate the effect of the eight factors.
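A quick way to confirm this without scanning the tables by eye is to filter the parameter table for significant regressions into the two outcomes; a sketch, assuming `fitsumpe` holds the output of `parameterEstimates(fit3)` as above:

```r
# keep only regressions (op == "~") into the two outcomes, significant at 5%;
# only the COM and SAT paths should survive the filter
sig <- subset(fitsumpe, op == "~" & lhs %in% c("C_REP", "C_CR") & pvalue < 0.05)
sig[, c("lhs", "rhs", "est", "pvalue")]
```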
For this we cannot look at the regression path results alone; we need to investigate the total effects of the factors on the outcomes, i.e. direct plus indirect effects.
In construct3 we defined the total effects for
Repurchase Intention:
| Factors | Formula |
|---|---|
| Assortment to C_REP | \(A\_REP := aa + u \cdot a + v \cdot i\) |
| French lifestyle to C_REP | \(Fr\_REP := bb + u \cdot b + v \cdot l\) |
| Shopping Experience to C_REP | \(Sh\_REP := cc + u \cdot c + v \cdot m\) |
| Organisation to C_REP | \(Org\_REP := dd + u \cdot d + v \cdot n\) |
| Luxury to C_REP | \(Lux\_REP := ee + u \cdot e + v \cdot o\) |
| Coolness to C_REP | \(Cool\_REP := ff + u \cdot f + v \cdot p\) |
| Food to C_REP | \(Food\_REP := gg + u \cdot g + v \cdot q\) |
| Professionalism to C_REP | \(Prof\_REP := hh + u \cdot h + v \cdot r\) |
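In lavaan syntax, such a total effect is built from labelled paths with the `:=` operator. A minimal sketch for the Assortment chain (labels `aa`, `u`, `v` as in the regression tables above; the `a` and `i` labels on the mediator regressions are assumed from construct3):

```r
model_sketch <- '
  # mediator regressions (labels a and i assumed from construct3)
  COM   ~ a*Assortment
  SAT   ~ i*Assortment
  # outcome regression: direct effect aa, mediator effects u and v
  C_REP ~ aa*Assortment + u*COM + v*SAT
  # total effect = direct effect + indirect effect through each mediator
  A_REP := aa + u*a + v*i
'
```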
estimates <- parameterestimates(fit3, boot.ci.type = "bca.simple", standardized = TRUE)
estimates[c(234:241),c(1,5,8)] %>% kable(digits = 3) %>% kable_styling(full_width = FALSE)
| | lhs | est | pvalue |
|---|---|---|---|
| 234 | A_REP | 0.029 | 0.285 |
| 235 | Fr_REP | 0.029 | 0.405 |
| 236 | Sh_REP | 0.121 | 0.000 |
| 237 | Org_REP | -0.018 | 0.549 |
| 238 | Lux_REP | 0.027 | 0.502 |
| 239 | Cool_REP | -0.013 | 0.680 |
| 240 | Food_REP | 0.062 | 0.184 |
| 241 | Prof_REP | 0.092 | 0.114 |
The first thing to report is the \(p\)-value of these estimates: only Shopping Experience is significant. As the null hypothesis of the test is \[H_0: est = 0\] a \(p\)-value above 0.05 indicates that we cannot reject the null hypothesis. Thus, estimates with a non-significant \(p\)-value cannot be distinguished from zero and, from a statistical point of view, cannot be compared or ranked. Rankings are only truly valid among statistically significant estimates.
As the question asks for a ranking of the estimates, we still provide one, but the reader should keep in mind that only Shopping Experience can be declared above all the others, and no true order can be established between non-significant estimates.
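As a side note, the reported z and p columns follow from a standard Wald test on each estimate; for example, for the Shop_experience → C_CR row of the earlier regression table (base R, matching the table up to rounding):

```r
est <- 0.154; se <- 0.086   # Shop_experience -> C_CR, from the regression table
z   <- est / se             # approx. 1.79
2 * pnorm(-abs(z))          # two-sided p-value, approx. 0.073
```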
In descending order of total effect, the client perception factors are:
Shopping Experience: the experience the client has in the Galeries is very important; if they had a good and relaxing time, they will be more likely to come back. The store is a place where people typically spend free time; if the experience was unpleasant and stressful, the idea of visiting the store again would not even be considered.
Professionalism;
Food;
Assortment;
French Lifestyle;
Luxury;
Organisation: it does not stimulate repurchase intention, which is quite logical; after all, arrangement and window organisation are not necessarily what bring the client back for more.
Coolness: reports a negative effect on Repurchase Intention. The effect is not significant, but we can reason that stocking the most on-trend items and brands does not necessarily mean commitment from the client: having a sales space inside the Galeries is a long-term commitment, and the popularity of a certain brand might only last a season.
We defined the total effects for Co-Creation Intention:
| Factors | Formula |
|---|---|
| Assortment to C_CR | \(A\_CR := ii + w \cdot a + z \cdot i\) |
| French lifestyle to C_CR | \(Fr\_CR := ll + w \cdot b + z \cdot l\) |
| Shopping Experience to C_CR | \(Sh\_CR := mm + w \cdot c + z \cdot m\) |
| Organisation to C_CR | \(Org\_CR := nn + w \cdot d + z \cdot n\) |
| Luxury to C_CR | \(Lux\_CR := oo + w \cdot e + z \cdot o\) |
| Coolness to C_CR | \(Cool\_CR := pp + w \cdot f + z \cdot p\) |
| Food to C_CR | \(Food\_CR := qq + w \cdot g + z \cdot q\) |
| Professionalism to C_CR | \(Prof\_CR := rr + w \cdot h + z \cdot r\) |
estimates[c(242:249),c(1,5,8)] %>% kable(digits = 3) %>% kable_styling(full_width = FALSE)
| | lhs | est | pvalue |
|---|---|---|---|
| 242 | A_CR | -0.004 | 0.964 |
| 243 | Fr_CR | -0.045 | 0.669 |
| 244 | Sh_CR | 0.341 | 0.000 |
| 245 | Org_CR | -0.010 | 0.912 |
| 246 | Lux_CR | 0.015 | 0.900 |
| 247 | Cool_CR | 0.023 | 0.808 |
| 248 | Food_CR | -0.048 | 0.725 |
| 249 | Prof_CR | -0.226 | 0.183 |
For Co-Creation as well, the only significant total effect is that of Shopping Experience. As before, we stress that ranking non-significant estimates is not statistically valid.
Here the survey did not mention any online shopping platform, so the in-person experience is very important. Moreover, we also have to consider that many of the direct effects from factors to this outcome have negative estimates.
In descending order of total effect, the client perception factors are:
Shopping Experience: it has a positive effect on the client intention to participate in new surveys and possible workshops.
Professionalism: negative total effect;
Food: negative total effect;
French Lifestyle: negative total effect;
Coolness;
Luxury;
Organisation: negative total effect;
Assortment: negative total effect.
Many of the estimates are negative, but we also have to consider that repurchase intention and participation in surveys and co-creative activities with the Galeries are not the same dynamic; the study demonstrates that they are driven by different elements.
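The rankings above can also be reproduced without hard-coded row indices by selecting the defined parameters (`op == ":="`) from the parameter table; a sketch, assuming `estimates` as computed earlier and that all Co-Creation total effects carry the `_CR` suffix used in construct3:

```r
# defined total effects on Co-Creation intention, ranked by size
tot <- subset(estimates, op == ":=" & grepl("_CR$", lhs))
tot[order(-tot$est), c("lhs", "est", "pvalue")]
```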